Test Report: Docker_macOS 18773

30a9d8153d68792af1ccb4545db3a1a834f0d1ba:2024-04-29:34253

Failed tests (22/203)

TestOffline (755.24s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-641000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-641000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m34.337075479s)

-- stdout --
	* [offline-docker-641000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-641000" primary control-plane node in "offline-docker-641000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-641000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0429 07:42:47.469756   33518 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:42:47.469961   33518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:42:47.469967   33518 out.go:304] Setting ErrFile to fd 2...
	I0429 07:42:47.469971   33518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:42:47.470151   33518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:42:47.471673   33518 out.go:298] Setting JSON to false
	I0429 07:42:47.494748   33518 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":20541,"bootTime":1714381226,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 07:42:47.494842   33518 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 07:42:47.515911   33518 out.go:177] * [offline-docker-641000] minikube v1.33.0 on Darwin 14.4.1
	I0429 07:42:47.557626   33518 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 07:42:47.557634   33518 notify.go:220] Checking for updates...
	I0429 07:42:47.578555   33518 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	I0429 07:42:47.599452   33518 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 07:42:47.620557   33518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 07:42:47.641610   33518 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	I0429 07:42:47.662406   33518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 07:42:47.683794   33518 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 07:42:47.738501   33518 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 07:42:47.738742   33518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:42:47.847277   33518 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:false NGoroutines:185 SystemTime:2024-04-29 14:42:47.836386074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:42:47.889699   33518 out.go:177] * Using the docker driver based on user configuration
	I0429 07:42:47.910535   33518 start.go:297] selected driver: docker
	I0429 07:42:47.910574   33518 start.go:901] validating driver "docker" against <nil>
	I0429 07:42:47.910592   33518 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 07:42:47.915097   33518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:42:48.025158   33518 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:false NGoroutines:185 SystemTime:2024-04-29 14:42:48.014172577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:42:48.025339   33518 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 07:42:48.025524   33518 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 07:42:48.046443   33518 out.go:177] * Using Docker Desktop driver with root privileges
	I0429 07:42:48.067786   33518 cni.go:84] Creating CNI manager for ""
	I0429 07:42:48.067843   33518 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 07:42:48.067862   33518 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 07:42:48.067943   33518 start.go:340] cluster config:
	{Name:offline-docker-641000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 07:42:48.089425   33518 out.go:177] * Starting "offline-docker-641000" primary control-plane node in "offline-docker-641000" cluster
	I0429 07:42:48.152518   33518 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 07:42:48.194731   33518 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 07:42:48.236618   33518 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:42:48.236656   33518 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 07:42:48.236722   33518 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 07:42:48.236755   33518 cache.go:56] Caching tarball of preloaded images
	I0429 07:42:48.236985   33518 preload.go:173] Found /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 07:42:48.237009   33518 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 07:42:48.238512   33518 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/offline-docker-641000/config.json ...
	I0429 07:42:48.238652   33518 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/offline-docker-641000/config.json: {Name:mk92fa631f496339dc1e399b5bdf7a17cf0b0558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 07:42:48.289316   33518 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 07:42:48.289333   33518 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 07:42:48.289351   33518 cache.go:194] Successfully downloaded all kic artifacts
	I0429 07:42:48.289394   33518 start.go:360] acquireMachinesLock for offline-docker-641000: {Name:mk7871f42d8e6acd5850dac5f3f3567af3b72577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 07:42:48.289557   33518 start.go:364] duration metric: took 150.89µs to acquireMachinesLock for "offline-docker-641000"
	I0429 07:42:48.289586   33518 start.go:93] Provisioning new machine with config: &{Name:offline-docker-641000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 07:42:48.289804   33518 start.go:125] createHost starting for "" (driver="docker")
	I0429 07:42:48.331554   33518 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 07:42:48.331948   33518 start.go:159] libmachine.API.Create for "offline-docker-641000" (driver="docker")
	I0429 07:42:48.332000   33518 client.go:168] LocalClient.Create starting
	I0429 07:42:48.332223   33518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 07:42:48.332327   33518 main.go:141] libmachine: Decoding PEM data...
	I0429 07:42:48.332360   33518 main.go:141] libmachine: Parsing certificate...
	I0429 07:42:48.332505   33518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 07:42:48.332579   33518 main.go:141] libmachine: Decoding PEM data...
	I0429 07:42:48.332595   33518 main.go:141] libmachine: Parsing certificate...
	I0429 07:42:48.370868   33518 cli_runner.go:164] Run: docker network inspect offline-docker-641000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 07:42:48.420782   33518 cli_runner.go:211] docker network inspect offline-docker-641000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 07:42:48.420884   33518 network_create.go:281] running [docker network inspect offline-docker-641000] to gather additional debugging logs...
	I0429 07:42:48.420903   33518 cli_runner.go:164] Run: docker network inspect offline-docker-641000
	W0429 07:42:48.470376   33518 cli_runner.go:211] docker network inspect offline-docker-641000 returned with exit code 1
	I0429 07:42:48.470405   33518 network_create.go:284] error running [docker network inspect offline-docker-641000]: docker network inspect offline-docker-641000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-641000 not found
	I0429 07:42:48.470418   33518 network_create.go:286] output of [docker network inspect offline-docker-641000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-641000 not found
	
	** /stderr **
	I0429 07:42:48.470545   33518 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:42:48.564099   33518 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:42:48.566073   33518 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:42:48.566693   33518 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00217c590}
	I0429 07:42:48.566728   33518 network_create.go:124] attempt to create docker network offline-docker-641000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0429 07:42:48.566842   33518 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-641000 offline-docker-641000
	I0429 07:42:48.653752   33518 network_create.go:108] docker network offline-docker-641000 192.168.67.0/24 created
	I0429 07:42:48.653797   33518 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-641000" container
	I0429 07:42:48.653914   33518 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 07:42:48.756384   33518 cli_runner.go:164] Run: docker volume create offline-docker-641000 --label name.minikube.sigs.k8s.io=offline-docker-641000 --label created_by.minikube.sigs.k8s.io=true
	I0429 07:42:48.806657   33518 oci.go:103] Successfully created a docker volume offline-docker-641000
	I0429 07:42:48.806771   33518 cli_runner.go:164] Run: docker run --rm --name offline-docker-641000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-641000 --entrypoint /usr/bin/test -v offline-docker-641000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 07:42:49.125685   33518 oci.go:107] Successfully prepared a docker volume offline-docker-641000
	I0429 07:42:49.125724   33518 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:42:49.125736   33518 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 07:42:49.125846   33518 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-641000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 07:48:48.336700   33518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:48:48.336840   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:48:48.388441   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:48:48.388576   33518 retry.go:31] will retry after 209.809053ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:48.600777   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:48:48.652405   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:48:48.652518   33518 retry.go:31] will retry after 353.66265ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:49.007114   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:48:49.057831   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:48:49.057932   33518 retry.go:31] will retry after 817.068669ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:49.876507   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:48:49.927711   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	W0429 07:48:49.927817   33518 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	
	W0429 07:48:49.927837   33518 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:49.927897   33518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:48:49.927948   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:48:49.976216   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:48:49.976317   33518 retry.go:31] will retry after 250.015115ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:50.228095   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:48:50.279932   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:48:50.280036   33518 retry.go:31] will retry after 420.503672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:50.701974   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:48:50.753998   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:48:50.754101   33518 retry.go:31] will retry after 285.648658ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:51.040862   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:48:51.091353   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:48:51.091450   33518 retry.go:31] will retry after 465.748567ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:51.558875   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:48:51.610021   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	W0429 07:48:51.610138   33518 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	
	W0429 07:48:51.610157   33518 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:51.610182   33518 start.go:128] duration metric: took 6m3.318028777s to createHost
	I0429 07:48:51.610189   33518 start.go:83] releasing machines lock for "offline-docker-641000", held for 6m3.318291171s
	W0429 07:48:51.610205   33518 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0429 07:48:51.610633   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:48:51.658138   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	I0429 07:48:51.658200   33518 delete.go:82] Unable to get host status for offline-docker-641000, assuming it has already been deleted: state: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	W0429 07:48:51.658283   33518 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0429 07:48:51.658292   33518 start.go:728] Will try again in 5 seconds ...
	I0429 07:48:56.660546   33518 start.go:360] acquireMachinesLock for offline-docker-641000: {Name:mk7871f42d8e6acd5850dac5f3f3567af3b72577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 07:48:56.660765   33518 start.go:364] duration metric: took 172.052µs to acquireMachinesLock for "offline-docker-641000"
	I0429 07:48:56.660803   33518 start.go:96] Skipping create...Using existing machine configuration
	I0429 07:48:56.660823   33518 fix.go:54] fixHost starting: 
	I0429 07:48:56.661224   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:48:56.712230   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	I0429 07:48:56.712283   33518 fix.go:112] recreateIfNeeded on offline-docker-641000: state= err=unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:56.712305   33518 fix.go:117] machineExists: false. err=machine does not exist
	I0429 07:48:56.733865   33518 out.go:177] * docker "offline-docker-641000" container is missing, will recreate.
	I0429 07:48:56.754626   33518 delete.go:124] DEMOLISHING offline-docker-641000 ...
	I0429 07:48:56.754844   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:48:56.804288   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	W0429 07:48:56.804345   33518 stop.go:83] unable to get state: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:56.804362   33518 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:56.804736   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:48:56.852880   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	I0429 07:48:56.852938   33518 delete.go:82] Unable to get host status for offline-docker-641000, assuming it has already been deleted: state: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:56.853012   33518 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-641000
	W0429 07:48:56.901615   33518 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-641000 returned with exit code 1
	I0429 07:48:56.901658   33518 kic.go:371] could not find the container offline-docker-641000 to remove it. will try anyways
	I0429 07:48:56.901743   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:48:56.949700   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	W0429 07:48:56.949748   33518 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:56.949824   33518 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-641000 /bin/bash -c "sudo init 0"
	W0429 07:48:56.997880   33518 cli_runner.go:211] docker exec --privileged -t offline-docker-641000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 07:48:56.997917   33518 oci.go:650] error shutdown offline-docker-641000: docker exec --privileged -t offline-docker-641000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:57.998401   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:48:58.049644   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	I0429 07:48:58.049694   33518 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:58.049708   33518 oci.go:664] temporary error: container offline-docker-641000 status is  but expect it to be exited
	I0429 07:48:58.049731   33518 retry.go:31] will retry after 622.474278ms: couldn't verify container is exited. %v: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:58.674547   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:48:58.724668   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	I0429 07:48:58.724716   33518 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:58.724731   33518 oci.go:664] temporary error: container offline-docker-641000 status is  but expect it to be exited
	I0429 07:48:58.724764   33518 retry.go:31] will retry after 907.513345ms: couldn't verify container is exited. %v: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:59.634079   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:48:59.687148   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	I0429 07:48:59.687207   33518 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:48:59.687221   33518 oci.go:664] temporary error: container offline-docker-641000 status is  but expect it to be exited
	I0429 07:48:59.687245   33518 retry.go:31] will retry after 1.086522817s: couldn't verify container is exited. %v: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:49:00.774742   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:49:00.826461   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:00.826504   33518 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:49:00.826515   33518 oci.go:664] temporary error: container offline-docker-641000 status is  but expect it to be exited
	I0429 07:49:00.826542   33518 retry.go:31] will retry after 1.121731901s: couldn't verify container is exited. %v: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:49:01.950729   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:49:02.002364   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:02.002420   33518 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:49:02.002429   33518 oci.go:664] temporary error: container offline-docker-641000 status is  but expect it to be exited
	I0429 07:49:02.002455   33518 retry.go:31] will retry after 3.370322611s: couldn't verify container is exited. %v: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:49:05.375145   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:49:05.425337   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:05.425395   33518 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:49:05.425408   33518 oci.go:664] temporary error: container offline-docker-641000 status is  but expect it to be exited
	I0429 07:49:05.425436   33518 retry.go:31] will retry after 2.64927544s: couldn't verify container is exited. %v: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:49:08.075045   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:49:08.124839   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:08.124887   33518 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:49:08.124897   33518 oci.go:664] temporary error: container offline-docker-641000 status is  but expect it to be exited
	I0429 07:49:08.124920   33518 retry.go:31] will retry after 5.324353095s: couldn't verify container is exited. %v: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:49:13.450104   33518 cli_runner.go:164] Run: docker container inspect offline-docker-641000 --format={{.State.Status}}
	W0429 07:49:13.502895   33518 cli_runner.go:211] docker container inspect offline-docker-641000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:13.502942   33518 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:49:13.502969   33518 oci.go:664] temporary error: container offline-docker-641000 status is  but expect it to be exited
	I0429 07:49:13.503003   33518 oci.go:88] couldn't shut down offline-docker-641000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	 
	I0429 07:49:13.503077   33518 cli_runner.go:164] Run: docker rm -f -v offline-docker-641000
	I0429 07:49:13.551217   33518 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-641000
	W0429 07:49:13.598134   33518 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-641000 returned with exit code 1
	I0429 07:49:13.598250   33518 cli_runner.go:164] Run: docker network inspect offline-docker-641000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:49:13.646890   33518 cli_runner.go:164] Run: docker network rm offline-docker-641000
	I0429 07:49:13.755083   33518 fix.go:124] Sleeping 1 second for extra luck!
	I0429 07:49:14.755703   33518 start.go:125] createHost starting for "" (driver="docker")
	I0429 07:49:14.778663   33518 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 07:49:14.778850   33518 start.go:159] libmachine.API.Create for "offline-docker-641000" (driver="docker")
	I0429 07:49:14.778879   33518 client.go:168] LocalClient.Create starting
	I0429 07:49:14.779105   33518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 07:49:14.779213   33518 main.go:141] libmachine: Decoding PEM data...
	I0429 07:49:14.779237   33518 main.go:141] libmachine: Parsing certificate...
	I0429 07:49:14.779316   33518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 07:49:14.779392   33518 main.go:141] libmachine: Decoding PEM data...
	I0429 07:49:14.779407   33518 main.go:141] libmachine: Parsing certificate...
	I0429 07:49:14.799806   33518 cli_runner.go:164] Run: docker network inspect offline-docker-641000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 07:49:14.850319   33518 cli_runner.go:211] docker network inspect offline-docker-641000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 07:49:14.850428   33518 network_create.go:281] running [docker network inspect offline-docker-641000] to gather additional debugging logs...
	I0429 07:49:14.850443   33518 cli_runner.go:164] Run: docker network inspect offline-docker-641000
	W0429 07:49:14.900742   33518 cli_runner.go:211] docker network inspect offline-docker-641000 returned with exit code 1
	I0429 07:49:14.900771   33518 network_create.go:284] error running [docker network inspect offline-docker-641000]: docker network inspect offline-docker-641000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-641000 not found
	I0429 07:49:14.900787   33518 network_create.go:286] output of [docker network inspect offline-docker-641000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-641000 not found
	
	** /stderr **
	I0429 07:49:14.900935   33518 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:49:14.950913   33518 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:49:14.952472   33518 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:49:14.953908   33518 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:49:14.955458   33518 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:49:14.956770   33518 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:49:14.957147   33518 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00070e3d0}
	I0429 07:49:14.957167   33518 network_create.go:124] attempt to create docker network offline-docker-641000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0429 07:49:14.957241   33518 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-641000 offline-docker-641000
	I0429 07:49:15.041565   33518 network_create.go:108] docker network offline-docker-641000 192.168.94.0/24 created
	I0429 07:49:15.041603   33518 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-641000" container
	I0429 07:49:15.041702   33518 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 07:49:15.090913   33518 cli_runner.go:164] Run: docker volume create offline-docker-641000 --label name.minikube.sigs.k8s.io=offline-docker-641000 --label created_by.minikube.sigs.k8s.io=true
	I0429 07:49:15.139200   33518 oci.go:103] Successfully created a docker volume offline-docker-641000
	I0429 07:49:15.139310   33518 cli_runner.go:164] Run: docker run --rm --name offline-docker-641000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-641000 --entrypoint /usr/bin/test -v offline-docker-641000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 07:49:15.375864   33518 oci.go:107] Successfully prepared a docker volume offline-docker-641000
	I0429 07:49:15.375893   33518 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:49:15.375905   33518 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 07:49:15.376018   33518 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-641000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 07:55:14.855764   33518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:55:14.855865   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:14.908121   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:14.908244   33518 retry.go:31] will retry after 353.67759ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:15.262979   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:15.313235   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:15.313351   33518 retry.go:31] will retry after 532.77234ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:15.848118   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:15.898697   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:15.898800   33518 retry.go:31] will retry after 721.33616ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:16.621075   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:16.672535   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	W0429 07:55:16.672644   33518 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	
	W0429 07:55:16.672667   33518 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:16.672726   33518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:55:16.672778   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:16.721308   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:16.721418   33518 retry.go:31] will retry after 349.949756ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:17.073320   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:17.125653   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:17.125756   33518 retry.go:31] will retry after 444.608636ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:17.571719   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:17.623949   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:17.624052   33518 retry.go:31] will retry after 290.563993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:17.916983   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:17.969971   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:17.970072   33518 retry.go:31] will retry after 556.253312ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:18.528688   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:18.580884   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	W0429 07:55:18.580994   33518 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	
	W0429 07:55:18.581010   33518 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:18.581021   33518 start.go:128] duration metric: took 6m3.748716457s to createHost
	I0429 07:55:18.581094   33518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:55:18.581150   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:18.629134   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:18.629229   33518 retry.go:31] will retry after 297.300673ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:18.928940   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:18.981650   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:18.981747   33518 retry.go:31] will retry after 392.959843ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:19.376640   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:19.430477   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:19.430568   33518 retry.go:31] will retry after 588.452747ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:20.021440   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:20.075823   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	W0429 07:55:20.075926   33518 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	
	W0429 07:55:20.075945   33518 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:20.076005   33518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:55:20.076067   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:20.123748   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:20.123842   33518 retry.go:31] will retry after 193.273596ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:20.319486   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:20.371662   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:20.371758   33518 retry.go:31] will retry after 358.56885ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:20.731925   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:20.785235   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	I0429 07:55:20.785335   33518 retry.go:31] will retry after 817.328235ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:21.604218   33518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000
	W0429 07:55:21.654281   33518 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000 returned with exit code 1
	W0429 07:55:21.654390   33518 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	
	W0429 07:55:21.654409   33518 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-641000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-641000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000
	I0429 07:55:21.654418   33518 fix.go:56] duration metric: took 6m24.916908961s for fixHost
	I0429 07:55:21.654424   33518 start.go:83] releasing machines lock for "offline-docker-641000", held for 6m24.916955636s
	W0429 07:55:21.654501   33518 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-641000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-641000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 07:55:21.696457   33518 out.go:177] 
	W0429 07:55:21.717829   33518 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 07:55:21.717883   33518 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 07:55:21.717908   33518 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 07:55:21.738631   33518 out.go:177] 

** /stderr **
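
Almost all of the stderr above is a single pattern on repeat: "docker container inspect" fails with "No such container: offline-docker-641000", and retry.go schedules another attempt a few hundred milliseconds later, until the six-minute createHost deadline expires. The Go sketch below shows the general shape of such a retry-with-backoff loop. It is illustrative only: the name retryBackoff and the doubling policy are assumptions, not minikube's actual retry.go, whose delays in the log are jittered rather than doubled.

	// retry_sketch.go: a minimal retry-with-backoff loop, illustrative only;
	// minikube's actual retry.go (seen above) uses its own jittered delays.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryBackoff retries fn up to maxAttempts times, doubling the wait
	// between attempts, and returns the last error if every attempt fails.
	func retryBackoff(fn func() error, maxAttempts int, base time.Duration) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			wait := base << attempt // 200ms, 400ms, 800ms, ...
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		return fmt.Errorf("all %d attempts failed: %w", maxAttempts, err)
	}

	func main() {
		err := retryBackoff(func() error {
			// Stands in for the failing "docker container inspect" call above.
			return errors.New("No such container: offline-docker-641000")
		}, 4, 200*time.Millisecond)
		fmt.Println(err)
	}

Unlike the log, which keeps retrying until a wall-clock deadline, this sketch simply gives up after a fixed number of attempts.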
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-641000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-04-29 07:55:21.835076 -0700 PDT m=+6152.770838476
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-641000
helpers_test.go:235: (dbg) docker inspect offline-docker-641000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-641000",
	        "Id": "6f0af58da24ea7d3df7287e21127ed40e053d91b8738334e169d6a71ba952044",
	        "Created": "2024-04-29T14:49:15.002313804Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-641000"
	        }
	    }
	]

-- /stdout --
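
The inspect output above pins down the failure state: the network object created at 07:49:15 survived, but "Containers" is empty, so every port-22 lookup in the log had nothing to inspect. The --format strings passed to docker throughout this report are Go text/template expressions; the sketch below applies the same template syntax to a trimmed stand-in struct, which is an assumption for illustration, not Docker's real inspect type.

	// template_sketch.go: the text/template syntax behind the docker
	// "--format" arguments above, applied to a trimmed stand-in struct.
	package main

	import (
		"os"
		"text/template"
	)

	type ipamConfig struct {
		Subnet  string
		Gateway string
	}

	type network struct {
		Name string
		IPAM struct{ Config []ipamConfig }
	}

	func main() {
		n := network{Name: "offline-docker-641000"}
		n.IPAM.Config = []ipamConfig{{Subnet: "192.168.94.0/24", Gateway: "192.168.94.1"}}

		// Same {{range .IPAM.Config}} shape as the --format argument in the log.
		tmpl := template.Must(template.New("net").Parse(
			"{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}\n"))
		if err := tmpl.Execute(os.Stdout, n); err != nil {
			panic(err)
		}
	}

Running it prints "offline-docker-641000: 192.168.94.0/24 via 192.168.94.1", matching the Subnet and Gateway in the inspect JSON above.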
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-641000 -n offline-docker-641000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-641000 -n offline-docker-641000: exit status 7 (112.481356ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 07:55:21.999579   34557 status.go:249] status error: host: state: unknown state "offline-docker-641000": docker container inspect offline-docker-641000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-641000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-641000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-641000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-641000
--- FAIL: TestOffline (755.24s)
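
Before the container creation stalled, the network_create lines in this failure show how the subnet was chosen: candidates 192.168.49.0/24 through 192.168.85.0/24 were skipped as reserved and 192.168.94.0/24 was taken. A minimal sketch of that scan follows; the 9-wide stride and the reserved set are read off the log output, not taken from minikube's network.go.

	// subnet_sketch.go: a sketch of the free-subnet scan in the
	// network.go:209/206 lines above; the stride and reserved set are
	// inferred from the log, not minikube's actual implementation.
	package main

	import "fmt"

	func main() {
		reserved := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true,
		}
		// Walk the same candidates the log does: third octet 49, 58, 67, ...
		for octet := 49; octet <= 247; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if reserved[cidr] {
				fmt.Println("skipping subnet", cidr, "that is reserved")
				continue
			}
			fmt.Println("using free private subnet", cidr)
			break
		}
	}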

TestCertOptions (7201.471s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-799000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0429 08:09:05.563895   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 08:09:06.973870   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 08:09:22.509565   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (4m51s)
	TestCertOptions (4m15s)
	TestNetworkPlugins (30m2s)
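
This is not TestCertOptions failing on its own: the whole test binary hit the 2h -timeout, at which point Go's testing alarm (testing.(*M).startAlarm, first goroutine below) panics and dumps every goroutine. The sketch below reproduces that failure mode in miniature; the file and test names are invented for illustration.

	// timeout_sketch_test.go: run with "go test -timeout 1s" to reproduce
	// the "panic: test timed out" goroutine dump below, in miniature.
	package main

	import (
		"testing"
		"time"
	)

	func TestSleepsPastDeadline(t *testing.T) {
		time.Sleep(5 * time.Second) // exceeds -timeout=1s, so startAlarm panics
	}

The dump that follows is therefore a snapshot of everything in flight at the 2h0m0s mark, not a crash inside any single test.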

goroutine 2459 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0009b2d00, 0xc001387bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc00084c4e0, {0x11362fc0, 0x2a, 0x2a}, {0xceb4aa5?, 0xe9eae19?, 0x11385d80?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000486820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000486820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0006d1b80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2155 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020dc9c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020dc9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0020dc9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc0020dc9c0, 0xffd1580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
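
Goroutine 2155 and many traces below it have been parked in testing.(*testContext).waitParallel for 30 minutes: each called t.Parallel() and is queued until a -parallel slot frees up, which never happens once other tests wedge. A minimal reproduction sketch, with invented test names:

	// parallel_sketch_test.go: run with "go test -parallel 2 -v"; subtests
	// beyond the two slots block in waitParallel, like the goroutines here.
	package main

	import (
		"testing"
		"time"
	)

	func TestQueuedSubtests(t *testing.T) {
		for i := 0; i < 8; i++ {
			t.Run("sub", func(t *testing.T) {
				t.Parallel() // parks in waitParallel until a slot is free
				time.Sleep(2 * time.Second)
			})
		}
	}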

goroutine 564 [syscall, 4 minutes]:
syscall.syscall6(0xc00289ff80?, 0x1000000000010?, 0x10000000019?, 0x58e05af8?, 0x90?, 0x11c9f108?, 0x90?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0013b5a40?, 0xcdf50a5?, 0x90?, 0xff3e140?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xcf25c45?, 0xc0013b5a74, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0027b65a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0026c0420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0026c0420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0020f4820, 0xc0026c0420)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc0020f4820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc0020f4820, 0xffd1470)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 38 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 37
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 175 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00137c5c0, 0xc0006cc000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 563 [syscall, 4 minutes]:
syscall.syscall6(0xc00289ff80?, 0x1000000000010?, 0x10000000019?, 0x58e05af8?, 0x90?, 0x11c9f5b8?, 0x90?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0012e78a0?, 0xcdf50a5?, 0x90?, 0xff3e140?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xcf25c45?, 0xc0012e78d4, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0027b6420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0026c02c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0026c02c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0020f4680, 0xc0026c02c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc0020f4680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc0020f4680, 0xffd1478)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2166 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00230cea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00230cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00230cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00230cea0, 0xc0026fe500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2141
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 174 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0012bea20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 857 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 856
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 194 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00137c510, 0x2c)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xfacb3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0012be900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00137c5c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006b1440, {0xffdd760, 0xc0009f86f0}, 0x1, 0xc0006cc000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006b1440, 0x3b9aca00, 0x0, 0x1, 0xc0006cc000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 175
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 195 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x10001240, 0xc0006cc000}, 0xc000781f50, 0xc002876f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x10001240, 0xc0006cc000}, 0x0?, 0xc000781f50, 0xc000781f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x10001240?, 0xc0006cc000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 175
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 196 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 195
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 856 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x10001240, 0xc0006cc000}, 0xc000780750, 0xc0013a9f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x10001240, 0xc0006cc000}, 0x20?, 0xc000780750, 0xc000780798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x10001240?, 0xc0006cc000?}, 0xc0007807b0?, 0xd37b858?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0007807d0?, 0xcf6ec04?, 0xc0013bca20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 888
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2048 [chan receive, 30 minutes]:
testing.(*T).Run(0xc0020dcb60, {0xe9918e7?, 0x11c0be9c3c07?}, 0xc002470018)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0020dcb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0020dcb60, 0xffd1558)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2165 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00230cd00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00230cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00230cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00230cd00, 0xc0026fe480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2141
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2143 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00230c340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00230c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00230c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00230c340, 0xc0026fe180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2141
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2458 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026c02c0, 0xc002686540)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 563
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2164 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00230cb60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00230cb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00230cb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00230cb60, 0xc0026fe400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2141
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2163 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00230c9c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00230c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00230c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00230c9c0, 0xc0026fe380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2141
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2457 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x58c5cc58, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00276c8a0?, 0xc0020f7000?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00276c8a0, {0xc0020f7000, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002140200, {0xc0020f7000?, 0x9?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00289e510, {0xffdc178, 0xc0013844c0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xffdc2b8, 0xc00289e510}, {0xffdc178, 0xc0013844c0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000117678?, {0xffdc2b8, 0xc00289e510})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000117738?, {0xffdc2b8?, 0xc00289e510?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xffdc2b8, 0xc00289e510}, {0xffdc238, 0xc002140200}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00275a3c0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 563
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 2144 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00230c4e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00230c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00230c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00230c4e0, 0xc0026fe200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2141
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2456 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x58e38d20, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00276c7e0?, 0xc002442291?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00276c7e0, {0xc002442291, 0x56f, 0x56f})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0021401a0, {0xc002442291?, 0xc0023d7dc0?, 0x227?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00289e4b0, {0xffdc178, 0xc0013844b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xffdc2b8, 0xc00289e4b0}, {0xffdc178, 0xc0013844b0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000115e78?, {0xffdc2b8, 0xc00289e4b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000115f38?, {0xffdc2b8?, 0xc00289e4b0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xffdc2b8, 0xc00289e4b0}, {0xffdc238, 0xc0021401a0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00275a660?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 563
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 2082 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020dcea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020dcea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0020dcea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0020dcea0, 0xffd1570)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1758 [syscall, 97 minutes]:
syscall.syscall(0x0?, 0xc0027f8888?, 0xce9cf05?, 0xc000110eb0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc000110ef0?, 0xc0026641c0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1753
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
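
Goroutine 1758 has been inside syscall.Flock for 97 minutes: github.com/juju/mutex implements its cross-process lock as a blocking flock(2) on a lock file, so a wedged holder elsewhere stalls every waiter indefinitely. A minimal sketch of that primitive follows; the /tmp/demo.lock path and the structure are illustrative assumptions, not juju/mutex's actual layout.

	// flock_sketch.go: a blocking flock(2) lock in the spirit of the
	// juju/mutex trace above; path and structure are assumptions.
	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	func main() {
		f, err := os.OpenFile("/tmp/demo.lock", os.O_CREATE|os.O_RDONLY, 0o644)
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// LOCK_EX blocks until no other process holds the lock; this is
		// the call goroutine 1758 above has been parked in.
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
			panic(err)
		}
		fmt.Println("lock acquired")
		defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	}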

goroutine 2432 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026c0420, 0xc00275a840)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 564
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2158 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020dd380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020dd380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0020dd380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc0020dd380, 0xffd1538)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 651 [IO wait, 111 minutes]:
internal/poll.runtime_pollWait(0x58c5d700, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00082c000?, 0x3fe?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00082c000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00082c000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00277e4a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00277e4a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0009fe0f0, {0xfff40f0, 0xc00277e4a0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0009fe0f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xcf6ec04?, 0xc000680820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 648
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2081 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020dcd00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020dcd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0020dcd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0020dcd00, 0xffd1560)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1205 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002139a20, 0xc0006ccd20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 753
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

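The "chan send, 107 minutes" watchCtx goroutines (1205 here, and 1175, 1216, 1031 below) come from commands started via exec.CommandContext: Start spawns watchCtx, and once the context fires it tries to hand its result to Wait over a channel, so if Wait is never reached the sender stays blocked. A minimal sketch of the healthy pattern, assuming that reading of the Go 1.22 os/exec internals (hypothetical command):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Start spawns a watchCtx goroutine for the context.
	cmd := exec.CommandContext(ctx, "sleep", "10") // hypothetical command
	// CombinedOutput calls Wait internally, which receives watchCtx's
	// result; skipping Wait after Start is what leaves watchCtx parked
	// in "chan send", as in the dumps here.
	out, err := cmd.CombinedOutput()
	fmt.Printf("out=%q err=%v\n", out, err)
}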
goroutine 887 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0013d9b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 766
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2161 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00230c680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00230c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00230c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00230c680, 0xc0026fe280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2141
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1297 [select, 107 minutes]:
net/http.(*persistConn).writeLoop(0xc00242a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1284
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

goroutine 2162 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00230c820)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00230c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00230c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00230c820, 0xc0026fe300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2141
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2147 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020dc000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020dc000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0020dc000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0020dc000, 0xffd15a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 855 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc002849bd0, 0x2b)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xfacb3a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0013d9a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002849c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009f0820, {0xffdd760, 0xc002123c20}, 0x1, 0xc0006cc000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009f0820, 0x3b9aca00, 0x0, 0x1, 0xc0006cc000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 888
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

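Goroutine 855 is client-go's certificate-rotation worker; workqueue's Get blocks on a condition variable while the queue is empty, so "sync.Cond.Wait, 4 minutes" is that loop's normal idle state. A minimal consumer sketch against the same k8s.io/client-go/util/workqueue package (hypothetical work item; this is not the cert_rotation code itself):

package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	q := workqueue.New()

	go func() {
		for {
			item, shutdown := q.Get() // parks in sync.Cond.Wait while the queue is empty
			if shutdown {
				return
			}
			fmt.Println("processing", item) // hypothetical work
			q.Done(item)                    // every Get must be paired with Done
		}
	}()

	q.Add("key-1")
	q.ShutDownWithDrain() // let the in-flight item finish, then unblock Get
}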
goroutine 2157 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020dd1e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020dd1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0020dd1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc0020dd1e0, 0xffd1520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 888 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002849c00, 0xc0006cc000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 766
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 1264 [select, 107 minutes]:
net/http.(*persistConn).readLoop(0xc00242a000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1284
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

goroutine 2141 [chan receive, 30 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00230c000, 0xc002470018)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2048
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1175 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc0020fe840, 0xc0013bc420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1174
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2142 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00230c1a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00230c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00230c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00230c1a0, 0xc0026fe000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2141
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2430 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x58c5d038, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0028448a0?, 0xc0027d829a?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0028448a0, {0xc0027d829a, 0x566, 0x566})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0021401e0, {0xc0027d829a?, 0xc002488fc0?, 0x230?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00289e6c0, {0xffdc178, 0xc001384498})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xffdc2b8, 0xc00289e6c0}, {0xffdc178, 0xc001384498}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc00077de78?, {0xffdc2b8, 0xc00289e6c0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00077df38?, {0xffdc2b8?, 0xc00289e6c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xffdc2b8, 0xc00289e6c0}, {0xffdc238, 0xc0021401e0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00275a780?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 564
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 2431 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x58c5d418, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002844960?, 0xc0020f6400?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002844960, {0xc0020f6400, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc002140210, {0xc0020f6400?, 0xb?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00289e7b0, {0xffdc178, 0xc0013844a0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xffdc2b8, 0xc00289e7b0}, {0xffdc178, 0xc0013844a0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x11297860?, {0xffdc2b8, 0xc00289e7b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0xffdc2b8?, 0xc00289e7b0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0xffdc2b8, 0xc00289e7b0}, {0xffdc238, 0xc002140210}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0026fe000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 564
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

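Goroutines 2430 and 2431 are the stdout/stderr pumps that os/exec creates when a command's output sink is not an *os.File: Start wires a pipe per stream and runs io.Copy into the supplied buffer on its own goroutine, which sits in "IO wait" until the child writes or exits. A minimal sketch of that wiring (hypothetical command):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var stdout, stderr bytes.Buffer

	cmd := exec.Command("echo", "hello") // hypothetical command
	cmd.Stdout = &stdout                 // non-*os.File sinks: Start spawns
	cmd.Stderr = &stderr                 // one io.Copy goroutine per stream

	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Wait joins the copy goroutines; while the child is alive and quiet,
	// they park in "IO wait" exactly as goroutines 2430/2431 do above.
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}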
goroutine 1216 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc00239cc60, 0xc0006cd980)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1215
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2156 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0006dc780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020dd040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020dd040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0020dd040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0020dd040, 0xffd15a8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1031 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026c1340, 0xc00268cea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1030
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

TestDockerFlags (758.51s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-413000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0429 07:59:06.971250   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:59:22.505936   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 08:03:50.110073   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 08:04:06.971906   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 08:04:22.508650   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-413000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m37.216654712s)

-- stdout --
	* [docker-flags-413000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-413000" primary control-plane node in "docker-flags-413000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-413000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0429 07:55:55.247084   34742 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:55:55.247344   34742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:55:55.247349   34742 out.go:304] Setting ErrFile to fd 2...
	I0429 07:55:55.247353   34742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:55:55.247519   34742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:55:55.249098   34742 out.go:298] Setting JSON to false
	I0429 07:55:55.272270   34742 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":21329,"bootTime":1714381226,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 07:55:55.272362   34742 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 07:55:55.295098   34742 out.go:177] * [docker-flags-413000] minikube v1.33.0 on Darwin 14.4.1
	I0429 07:55:55.337337   34742 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 07:55:55.337399   34742 notify.go:220] Checking for updates...
	I0429 07:55:55.359084   34742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	I0429 07:55:55.380265   34742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 07:55:55.422038   34742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 07:55:55.464236   34742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	I0429 07:55:55.508221   34742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 07:55:55.530131   34742 config.go:182] Loaded profile config "force-systemd-flag-789000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:55:55.530281   34742 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 07:55:55.585550   34742 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 07:55:55.585722   34742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:55:55.696237   34742 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:117 OomKillDisable:false NGoroutines:235 SystemTime:2024-04-29 14:55:55.684592681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:55:55.738953   34742 out.go:177] * Using the docker driver based on user configuration
	I0429 07:55:55.759910   34742 start.go:297] selected driver: docker
	I0429 07:55:55.759938   34742 start.go:901] validating driver "docker" against <nil>
	I0429 07:55:55.759953   34742 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 07:55:55.764329   34742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:55:55.876444   34742 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:117 OomKillDisable:false NGoroutines:235 SystemTime:2024-04-29 14:55:55.865689279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:55:55.876644   34742 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 07:55:55.876824   34742 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0429 07:55:55.899938   34742 out.go:177] * Using Docker Desktop driver with root privileges
	I0429 07:55:55.920796   34742 cni.go:84] Creating CNI manager for ""
	I0429 07:55:55.920832   34742 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 07:55:55.920845   34742 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 07:55:55.920934   34742 start.go:340] cluster config:
	{Name:docker-flags-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 07:55:55.941868   34742 out.go:177] * Starting "docker-flags-413000" primary control-plane node in "docker-flags-413000" cluster
	I0429 07:55:55.983648   34742 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 07:55:56.004821   34742 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 07:55:56.046793   34742 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:55:56.046844   34742 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 07:55:56.046867   34742 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 07:55:56.046885   34742 cache.go:56] Caching tarball of preloaded images
	I0429 07:55:56.047091   34742 preload.go:173] Found /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 07:55:56.047112   34742 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 07:55:56.047270   34742 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/docker-flags-413000/config.json ...
	I0429 07:55:56.048050   34742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/docker-flags-413000/config.json: {Name:mk29dc7a2911ee41af5707d519f8e5596d05f083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 07:55:56.098673   34742 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 07:55:56.098691   34742 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 07:55:56.098712   34742 cache.go:194] Successfully downloaded all kic artifacts
	I0429 07:55:56.098763   34742 start.go:360] acquireMachinesLock for docker-flags-413000: {Name:mk39fd2037ada840d026a88b671324a66ce96339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 07:55:56.098921   34742 start.go:364] duration metric: took 147.03µs to acquireMachinesLock for "docker-flags-413000"
	I0429 07:55:56.098949   34742 start.go:93] Provisioning new machine with config: &{Name:docker-flags-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-413000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 07:55:56.099019   34742 start.go:125] createHost starting for "" (driver="docker")
	I0429 07:55:56.141863   34742 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 07:55:56.142198   34742 start.go:159] libmachine.API.Create for "docker-flags-413000" (driver="docker")
	I0429 07:55:56.142245   34742 client.go:168] LocalClient.Create starting
	I0429 07:55:56.142447   34742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 07:55:56.142540   34742 main.go:141] libmachine: Decoding PEM data...
	I0429 07:55:56.142572   34742 main.go:141] libmachine: Parsing certificate...
	I0429 07:55:56.142658   34742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 07:55:56.142732   34742 main.go:141] libmachine: Decoding PEM data...
	I0429 07:55:56.142748   34742 main.go:141] libmachine: Parsing certificate...
	I0429 07:55:56.143553   34742 cli_runner.go:164] Run: docker network inspect docker-flags-413000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 07:55:56.193079   34742 cli_runner.go:211] docker network inspect docker-flags-413000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 07:55:56.193188   34742 network_create.go:281] running [docker network inspect docker-flags-413000] to gather additional debugging logs...
	I0429 07:55:56.193213   34742 cli_runner.go:164] Run: docker network inspect docker-flags-413000
	W0429 07:55:56.240959   34742 cli_runner.go:211] docker network inspect docker-flags-413000 returned with exit code 1
	I0429 07:55:56.240988   34742 network_create.go:284] error running [docker network inspect docker-flags-413000]: docker network inspect docker-flags-413000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-413000 not found
	I0429 07:55:56.241005   34742 network_create.go:286] output of [docker network inspect docker-flags-413000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-413000 not found
	
	** /stderr **
	I0429 07:55:56.241116   34742 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:55:56.291096   34742 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:55:56.292466   34742 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:55:56.293862   34742 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:55:56.294349   34742 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00233d400}
	I0429 07:55:56.294376   34742 network_create.go:124] attempt to create docker network docker-flags-413000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0429 07:55:56.294457   34742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-413000 docker-flags-413000
	W0429 07:55:56.343676   34742 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-413000 docker-flags-413000 returned with exit code 1
	W0429 07:55:56.343710   34742 network_create.go:149] failed to create docker network docker-flags-413000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-413000 docker-flags-413000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0429 07:55:56.343730   34742 network_create.go:116] failed to create docker network docker-flags-413000 192.168.76.0/24, will retry: subnet is taken
	I0429 07:55:56.345301   34742 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:55:56.345642   34742 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002502ab0}
	I0429 07:55:56.345654   34742 network_create.go:124] attempt to create docker network docker-flags-413000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0429 07:55:56.345727   34742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-413000 docker-flags-413000
	I0429 07:55:56.471993   34742 network_create.go:108] docker network docker-flags-413000 192.168.85.0/24 created
	I0429 07:55:56.472037   34742 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-413000" container
	I0429 07:55:56.472188   34742 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 07:55:56.525407   34742 cli_runner.go:164] Run: docker volume create docker-flags-413000 --label name.minikube.sigs.k8s.io=docker-flags-413000 --label created_by.minikube.sigs.k8s.io=true
	I0429 07:55:56.575237   34742 oci.go:103] Successfully created a docker volume docker-flags-413000
	I0429 07:55:56.575351   34742 cli_runner.go:164] Run: docker run --rm --name docker-flags-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-413000 --entrypoint /usr/bin/test -v docker-flags-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 07:55:56.906804   34742 oci.go:107] Successfully prepared a docker volume docker-flags-413000
	I0429 07:55:56.906847   34742 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:55:56.906860   34742 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 07:55:56.906958   34742 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-413000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 08:01:56.146690   34742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 08:01:56.146847   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:01:56.197368   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:01:56.197491   34742 retry.go:31] will retry after 290.829238ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:01:56.490683   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:01:56.541712   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:01:56.541804   34742 retry.go:31] will retry after 428.988719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:01:56.972063   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:01:57.025181   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:01:57.025279   34742 retry.go:31] will retry after 314.960752ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:01:57.342603   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:01:57.395189   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:01:57.395280   34742 retry.go:31] will retry after 702.834411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:01:58.100487   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:01:58.152281   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	W0429 08:01:58.152383   34742 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	
	W0429 08:01:58.152404   34742 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:01:58.152462   34742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 08:01:58.152514   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:01:58.201306   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:01:58.201410   34742 retry.go:31] will retry after 360.428361ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:01:58.563140   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:01:58.614778   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:01:58.614871   34742 retry.go:31] will retry after 331.874097ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:01:58.947669   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:01:58.999318   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:01:58.999410   34742 retry.go:31] will retry after 785.468261ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:01:59.787281   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:01:59.841601   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	W0429 08:01:59.841697   34742 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	
	W0429 08:01:59.841720   34742 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:01:59.841738   34742 start.go:128] duration metric: took 6m3.740633114s to createHost
	I0429 08:01:59.841745   34742 start.go:83] releasing machines lock for "docker-flags-413000", held for 6m3.740741608s
	W0429 08:01:59.841763   34742 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0429 08:01:59.842201   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:01:59.891944   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:59.892011   34742 delete.go:82] Unable to get host status for docker-flags-413000, assuming it has already been deleted: state: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	W0429 08:01:59.892106   34742 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0429 08:01:59.892117   34742 start.go:728] Will try again in 5 seconds ...
	I0429 08:02:04.893675   34742 start.go:360] acquireMachinesLock for docker-flags-413000: {Name:mk39fd2037ada840d026a88b671324a66ce96339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 08:02:04.894006   34742 start.go:364] duration metric: took 164.669µs to acquireMachinesLock for "docker-flags-413000"
	I0429 08:02:04.894040   34742 start.go:96] Skipping create...Using existing machine configuration
	I0429 08:02:04.894060   34742 fix.go:54] fixHost starting: 
	I0429 08:02:04.894545   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:04.945883   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:02:04.945928   34742 fix.go:112] recreateIfNeeded on docker-flags-413000: state= err=unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:04.945947   34742 fix.go:117] machineExists: false. err=machine does not exist
	I0429 08:02:04.967682   34742 out.go:177] * docker "docker-flags-413000" container is missing, will recreate.
	I0429 08:02:04.989117   34742 delete.go:124] DEMOLISHING docker-flags-413000 ...
	I0429 08:02:04.989351   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:05.038923   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	W0429 08:02:05.038976   34742 stop.go:83] unable to get state: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:05.038995   34742 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:05.039368   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:05.087655   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:02:05.087721   34742 delete.go:82] Unable to get host status for docker-flags-413000, assuming it has already been deleted: state: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:05.087804   34742 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-413000
	W0429 08:02:05.136113   34742 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-413000 returned with exit code 1
	I0429 08:02:05.136154   34742 kic.go:371] could not find the container docker-flags-413000 to remove it. will try anyways
	I0429 08:02:05.136240   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:05.184437   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	W0429 08:02:05.184496   34742 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:05.184588   34742 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-413000 /bin/bash -c "sudo init 0"
	W0429 08:02:05.232655   34742 cli_runner.go:211] docker exec --privileged -t docker-flags-413000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 08:02:05.232689   34742 oci.go:650] error shutdown docker-flags-413000: docker exec --privileged -t docker-flags-413000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:06.235114   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:06.287543   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:02:06.287605   34742 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:06.287615   34742 oci.go:664] temporary error: container docker-flags-413000 status is  but expect it to be exited
	I0429 08:02:06.287637   34742 retry.go:31] will retry after 675.869153ms: couldn't verify container is exited. %v: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:06.965898   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:07.015144   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:02:07.015189   34742 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:07.015203   34742 oci.go:664] temporary error: container docker-flags-413000 status is  but expect it to be exited
	I0429 08:02:07.015228   34742 retry.go:31] will retry after 501.910317ms: couldn't verify container is exited. %v: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:07.519529   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:07.574101   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:02:07.574151   34742 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:07.574168   34742 oci.go:664] temporary error: container docker-flags-413000 status is  but expect it to be exited
	I0429 08:02:07.574192   34742 retry.go:31] will retry after 588.192244ms: couldn't verify container is exited. %v: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:08.163644   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:08.214590   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:02:08.214635   34742 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:08.214645   34742 oci.go:664] temporary error: container docker-flags-413000 status is  but expect it to be exited
	I0429 08:02:08.214670   34742 retry.go:31] will retry after 884.791643ms: couldn't verify container is exited. %v: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:09.099935   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:09.152864   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:02:09.152908   34742 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:09.152919   34742 oci.go:664] temporary error: container docker-flags-413000 status is  but expect it to be exited
	I0429 08:02:09.152944   34742 retry.go:31] will retry after 2.622157417s: couldn't verify container is exited. %v: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:11.777472   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:11.830615   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:02:11.830670   34742 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:11.830684   34742 oci.go:664] temporary error: container docker-flags-413000 status is  but expect it to be exited
	I0429 08:02:11.830706   34742 retry.go:31] will retry after 2.778256788s: couldn't verify container is exited. %v: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:14.611271   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:14.663386   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:02:14.663429   34742 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:14.663438   34742 oci.go:664] temporary error: container docker-flags-413000 status is  but expect it to be exited
	I0429 08:02:14.663460   34742 retry.go:31] will retry after 5.727464482s: couldn't verify container is exited. %v: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:20.391669   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:20.443786   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:02:20.443844   34742 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:20.443853   34742 oci.go:664] temporary error: container docker-flags-413000 status is  but expect it to be exited
	I0429 08:02:20.443874   34742 retry.go:31] will retry after 4.668763088s: couldn't verify container is exited. %v: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:25.115060   34742 cli_runner.go:164] Run: docker container inspect docker-flags-413000 --format={{.State.Status}}
	W0429 08:02:25.168698   34742 cli_runner.go:211] docker container inspect docker-flags-413000 --format={{.State.Status}} returned with exit code 1
	I0429 08:02:25.168747   34742 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:02:25.168758   34742 oci.go:664] temporary error: container docker-flags-413000 status is  but expect it to be exited
	I0429 08:02:25.168789   34742 oci.go:88] couldn't shut down docker-flags-413000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	 
	I0429 08:02:25.168866   34742 cli_runner.go:164] Run: docker rm -f -v docker-flags-413000
	I0429 08:02:25.216493   34742 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-413000
	W0429 08:02:25.280540   34742 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-413000 returned with exit code 1
	I0429 08:02:25.280649   34742 cli_runner.go:164] Run: docker network inspect docker-flags-413000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 08:02:25.330054   34742 cli_runner.go:164] Run: docker network rm docker-flags-413000
	I0429 08:02:25.438215   34742 fix.go:124] Sleeping 1 second for extra luck!
	I0429 08:02:26.440461   34742 start.go:125] createHost starting for "" (driver="docker")
	I0429 08:02:26.492719   34742 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 08:02:26.492931   34742 start.go:159] libmachine.API.Create for "docker-flags-413000" (driver="docker")
	I0429 08:02:26.492967   34742 client.go:168] LocalClient.Create starting
	I0429 08:02:26.493181   34742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 08:02:26.493284   34742 main.go:141] libmachine: Decoding PEM data...
	I0429 08:02:26.493309   34742 main.go:141] libmachine: Parsing certificate...
	I0429 08:02:26.493406   34742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 08:02:26.493482   34742 main.go:141] libmachine: Decoding PEM data...
	I0429 08:02:26.493498   34742 main.go:141] libmachine: Parsing certificate...
	I0429 08:02:26.514215   34742 cli_runner.go:164] Run: docker network inspect docker-flags-413000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 08:02:26.566926   34742 cli_runner.go:211] docker network inspect docker-flags-413000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 08:02:26.567016   34742 network_create.go:281] running [docker network inspect docker-flags-413000] to gather additional debugging logs...
	I0429 08:02:26.567036   34742 cli_runner.go:164] Run: docker network inspect docker-flags-413000
	W0429 08:02:26.615145   34742 cli_runner.go:211] docker network inspect docker-flags-413000 returned with exit code 1
	I0429 08:02:26.615177   34742 network_create.go:284] error running [docker network inspect docker-flags-413000]: docker network inspect docker-flags-413000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-413000 not found
	I0429 08:02:26.615189   34742 network_create.go:286] output of [docker network inspect docker-flags-413000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-413000 not found
	
	** /stderr **
	I0429 08:02:26.615339   34742 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 08:02:26.666064   34742 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 08:02:26.667631   34742 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 08:02:26.669258   34742 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 08:02:26.670976   34742 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 08:02:26.672675   34742 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 08:02:26.674153   34742 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 08:02:26.674608   34742 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0009ff240}
	I0429 08:02:26.674626   34742 network_create.go:124] attempt to create docker network docker-flags-413000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0429 08:02:26.674713   34742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-413000 docker-flags-413000
	I0429 08:02:26.759203   34742 network_create.go:108] docker network docker-flags-413000 192.168.103.0/24 created
	I0429 08:02:26.759242   34742 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-413000" container
	I0429 08:02:26.759353   34742 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 08:02:26.809555   34742 cli_runner.go:164] Run: docker volume create docker-flags-413000 --label name.minikube.sigs.k8s.io=docker-flags-413000 --label created_by.minikube.sigs.k8s.io=true
	I0429 08:02:26.858096   34742 oci.go:103] Successfully created a docker volume docker-flags-413000
	I0429 08:02:26.858222   34742 cli_runner.go:164] Run: docker run --rm --name docker-flags-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-413000 --entrypoint /usr/bin/test -v docker-flags-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 08:02:27.102280   34742 oci.go:107] Successfully prepared a docker volume docker-flags-413000
	I0429 08:02:27.102328   34742 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 08:02:27.102343   34742 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 08:02:27.102446   34742 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-413000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 08:08:26.497360   34742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 08:08:26.497487   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:26.550327   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:26.550440   34742 retry.go:31] will retry after 172.533655ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:26.725362   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:26.779699   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:26.779796   34742 retry.go:31] will retry after 373.995593ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:27.156202   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:27.208055   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:27.208165   34742 retry.go:31] will retry after 328.800507ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:27.539371   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:27.591807   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	W0429 08:08:27.591918   34742 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	
	W0429 08:08:27.591941   34742 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:27.592001   34742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 08:08:27.592053   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:27.640628   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:27.640743   34742 retry.go:31] will retry after 133.331806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:27.775052   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:27.827322   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:27.827424   34742 retry.go:31] will retry after 356.746852ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:28.185143   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:28.237489   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:28.237596   34742 retry.go:31] will retry after 768.521609ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:29.008492   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:29.061428   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	W0429 08:08:29.061526   34742 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	
	W0429 08:08:29.061549   34742 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:29.061562   34742 start.go:128] duration metric: took 6m2.619010482s to createHost
	I0429 08:08:29.061627   34742 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 08:08:29.061692   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:29.110779   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:29.110867   34742 retry.go:31] will retry after 282.287092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:29.395520   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:29.449373   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:29.449471   34742 retry.go:31] will retry after 441.852596ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:29.892954   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:29.944280   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:29.944368   34742 retry.go:31] will retry after 583.346067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:30.528263   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:30.581301   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	W0429 08:08:30.581404   34742 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	
	W0429 08:08:30.581426   34742 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:30.581481   34742 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 08:08:30.581538   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:30.630561   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:30.630650   34742 retry.go:31] will retry after 265.834813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:30.898165   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:30.949080   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:30.949172   34742 retry.go:31] will retry after 392.104374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:31.343691   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:31.394637   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	I0429 08:08:31.394734   34742 retry.go:31] will retry after 808.24843ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:32.205377   34742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000
	W0429 08:08:32.258916   34742 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000 returned with exit code 1
	W0429 08:08:32.259022   34742 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	
	W0429 08:08:32.259037   34742 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-413000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-413000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	I0429 08:08:32.259048   34742 fix.go:56] duration metric: took 6m27.36278213s for fixHost
	I0429 08:08:32.259055   34742 start.go:83] releasing machines lock for "docker-flags-413000", held for 6m27.362825467s
	W0429 08:08:32.259128   34742 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-413000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-413000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 08:08:32.301500   34742 out.go:177] 
	W0429 08:08:32.322643   34742 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 08:08:32.322704   34742 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 08:08:32.322731   34742 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 08:08:32.343168   34742 out.go:177] 
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-413000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-413000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-413000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (200.542061ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-413000 host status: state: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-413000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-413000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-413000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (196.14959ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-413000 host status: state: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
	
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-413000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-413000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-04-29 08:08:32.815261 -0700 PDT m=+6943.746510640
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-413000
helpers_test.go:235: (dbg) docker inspect docker-flags-413000:
-- stdout --
	[
	    {
	        "Name": "docker-flags-413000",
	        "Id": "59da1f893583d4c52113096bd7003e6f94b9af6aa8e71e30e91fb87311a68631",
	        "Created": "2024-04-29T15:02:26.719797572Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-413000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-413000 -n docker-flags-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-413000 -n docker-flags-413000: exit status 7 (111.463688ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0429 08:08:32.976349   35452 status.go:249] status error: host: state: unknown state "docker-flags-413000": docker container inspect docker-flags-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-413000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-413000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-413000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-413000
--- FAIL: TestDockerFlags (758.51s)
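Triage note: both createHost attempts above show the same signature. The last Docker command to run is the preload extraction ("docker run --rm --entrypoint /usr/bin/tar ... -C /extractDir"), createHost then gives up at the 360-second timeout, and every later "docker container inspect" fails with "No such container" because the kicbase container was never created; the post-mortem inspect shows only the docker-flags-413000 network survived. The end state can be checked by hand with the same commands minikube retries in the log (container and network names are taken from this run):

	# container state lookup minikube polls; fails here with "No such container"
	docker container inspect docker-flags-413000 --format '{{.State.Status}}'
	# SSH port lookup that the retry loop repeats
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' docker-flags-413000
	# the leftover network visible in the post-mortem
	docker network inspect docker-flags-413000

As the log itself suggests, "minikube delete -p docker-flags-413000" removes the stale network and profile before any retry.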
TestForceSystemdFlag (755.71s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-789000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-789000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.615464053s)
-- stdout --
	* [force-systemd-flag-789000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-789000" primary control-plane node in "force-systemd-flag-789000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-789000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I0429 07:55:22.801666   34591 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:55:22.801972   34591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:55:22.801977   34591 out.go:304] Setting ErrFile to fd 2...
	I0429 07:55:22.801981   34591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:55:22.802149   34591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:55:22.803714   34591 out.go:298] Setting JSON to false
	I0429 07:55:22.826579   34591 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":21296,"bootTime":1714381226,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 07:55:22.826682   34591 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 07:55:22.848161   34591 out.go:177] * [force-systemd-flag-789000] minikube v1.33.0 on Darwin 14.4.1
	I0429 07:55:22.890185   34591 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 07:55:22.890238   34591 notify.go:220] Checking for updates...
	I0429 07:55:22.932044   34591 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	I0429 07:55:22.973829   34591 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 07:55:23.017170   34591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 07:55:23.058952   34591 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	I0429 07:55:23.100935   34591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 07:55:23.122955   34591 config.go:182] Loaded profile config "force-systemd-env-036000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:55:23.123121   34591 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 07:55:23.178566   34591 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 07:55:23.178722   34591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:55:23.291301   34591 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:225 SystemTime:2024-04-29 14:55:23.28061697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:55:23.333439   34591 out.go:177] * Using the docker driver based on user configuration
	I0429 07:55:23.354275   34591 start.go:297] selected driver: docker
	I0429 07:55:23.354339   34591 start.go:901] validating driver "docker" against <nil>
	I0429 07:55:23.354354   34591 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 07:55:23.358697   34591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:55:23.470587   34591 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:225 SystemTime:2024-04-29 14:55:23.460041234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:55:23.470777   34591 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 07:55:23.470964   34591 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 07:55:23.492864   34591 out.go:177] * Using Docker Desktop driver with root privileges
	I0429 07:55:23.514767   34591 cni.go:84] Creating CNI manager for ""
	I0429 07:55:23.514804   34591 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 07:55:23.514822   34591 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 07:55:23.514912   34591 start.go:340] cluster config:
	{Name:force-systemd-flag-789000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
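For reference, the cluster config dumped above is what gets persisted to the profile's config.json a few lines below ("Saving config to ..."). A minimal Go sketch of reading a few of those fields back, assuming the JSON keys match the struct field names shown in the log (the field subset here is hypothetical, not minikube's full schema):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// clusterConfig is a hypothetical subset of the config dumped above;
// field names are assumed to match the keys shown in the log.
type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
	}
}

func main() {
	// e.g. the profiles/<name>/config.json path logged below
	data, err := os.ReadFile("config.json")
	if err != nil {
		panic(err)
	}
	var cc clusterConfig
	if err := json.Unmarshal(data, &cc); err != nil {
		panic(err)
	}
	fmt.Println(cc.Name, cc.Driver, cc.Memory, cc.KubernetesConfig.KubernetesVersion)
}
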
	I0429 07:55:23.536535   34591 out.go:177] * Starting "force-systemd-flag-789000" primary control-plane node in "force-systemd-flag-789000" cluster
	I0429 07:55:23.578529   34591 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 07:55:23.599217   34591 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 07:55:23.641475   34591 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:55:23.641515   34591 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 07:55:23.641547   34591 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 07:55:23.641562   34591 cache.go:56] Caching tarball of preloaded images
	I0429 07:55:23.641781   34591 preload.go:173] Found /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 07:55:23.641802   34591 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 07:55:23.642730   34591 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/force-systemd-flag-789000/config.json ...
	I0429 07:55:23.642917   34591 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/force-systemd-flag-789000/config.json: {Name:mk2f9635e26683e615ec4eab1e56fc1c18597f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 07:55:23.692372   34591 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 07:55:23.692400   34591 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 07:55:23.692417   34591 cache.go:194] Successfully downloaded all kic artifacts
	I0429 07:55:23.692452   34591 start.go:360] acquireMachinesLock for force-systemd-flag-789000: {Name:mk0121ac282d77548310daa90ec043ed28059e54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 07:55:23.692609   34591 start.go:364] duration metric: took 145.7µs to acquireMachinesLock for "force-systemd-flag-789000"
	I0429 07:55:23.692637   34591 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-789000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 07:55:23.692686   34591 start.go:125] createHost starting for "" (driver="docker")
	I0429 07:55:23.735471   34591 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 07:55:23.735846   34591 start.go:159] libmachine.API.Create for "force-systemd-flag-789000" (driver="docker")
	I0429 07:55:23.735891   34591 client.go:168] LocalClient.Create starting
	I0429 07:55:23.736064   34591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 07:55:23.736160   34591 main.go:141] libmachine: Decoding PEM data...
	I0429 07:55:23.736193   34591 main.go:141] libmachine: Parsing certificate...
	I0429 07:55:23.736283   34591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 07:55:23.736356   34591 main.go:141] libmachine: Decoding PEM data...
	I0429 07:55:23.736371   34591 main.go:141] libmachine: Parsing certificate...
	I0429 07:55:23.737202   34591 cli_runner.go:164] Run: docker network inspect force-systemd-flag-789000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 07:55:23.788656   34591 cli_runner.go:211] docker network inspect force-systemd-flag-789000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 07:55:23.788761   34591 network_create.go:281] running [docker network inspect force-systemd-flag-789000] to gather additional debugging logs...
	I0429 07:55:23.788783   34591 cli_runner.go:164] Run: docker network inspect force-systemd-flag-789000
	W0429 07:55:23.837194   34591 cli_runner.go:211] docker network inspect force-systemd-flag-789000 returned with exit code 1
	I0429 07:55:23.837225   34591 network_create.go:284] error running [docker network inspect force-systemd-flag-789000]: docker network inspect force-systemd-flag-789000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-789000 not found
	I0429 07:55:23.837240   34591 network_create.go:286] output of [docker network inspect force-systemd-flag-789000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-789000 not found
	
	** /stderr **
	I0429 07:55:23.837357   34591 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:55:23.886850   34591 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:55:23.888474   34591 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:55:23.888825   34591 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00230eae0}
	I0429 07:55:23.888849   34591 network_create.go:124] attempt to create docker network force-systemd-flag-789000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0429 07:55:23.888919   34591 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-789000 force-systemd-flag-789000
	I0429 07:55:23.972764   34591 network_create.go:108] docker network force-systemd-flag-789000 192.168.67.0/24 created
	I0429 07:55:23.972810   34591 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-789000" container
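The subnet search logged just above (network.go:206/209) can be illustrated with a minimal Go sketch. It assumes candidate /24 subnets start at 192.168.49.0 and advance by 9 in the third octet, which matches the subnets skipped and chosen in this run; isReserved is a hypothetical stand-in for minikube's check against existing docker networks, not its actual implementation:

package main

import "fmt"

// firstFreeSubnet walks candidate /24 subnets (192.168.49.0, 192.168.58.0, ...)
// and returns the first one the caller does not consider reserved.
func firstFreeSubnet(isReserved func(cidr string) bool) (string, bool) {
	for octet := 49; octet <= 246; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if isReserved(cidr) {
			continue // e.g. 192.168.49.0/24 and 192.168.58.0/24 above
		}
		return cidr, true
	}
	return "", false
}

func main() {
	reserved := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
	cidr, _ := firstFreeSubnet(func(c string) bool { return reserved[c] })
	fmt.Println(cidr) // prints 192.168.67.0/24, as chosen in the log
}
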
	I0429 07:55:23.972926   34591 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 07:55:24.022951   34591 cli_runner.go:164] Run: docker volume create force-systemd-flag-789000 --label name.minikube.sigs.k8s.io=force-systemd-flag-789000 --label created_by.minikube.sigs.k8s.io=true
	I0429 07:55:24.071851   34591 oci.go:103] Successfully created a docker volume force-systemd-flag-789000
	I0429 07:55:24.071984   34591 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-789000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-789000 --entrypoint /usr/bin/test -v force-systemd-flag-789000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 07:55:24.392275   34591 oci.go:107] Successfully prepared a docker volume force-systemd-flag-789000
	I0429 07:55:24.392315   34591 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:55:24.392328   34591 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 07:55:24.392439   34591 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-789000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 08:01:23.740387   34591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 08:01:23.740543   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:01:23.792462   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:01:23.792605   34591 retry.go:31] will retry after 250.425632ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:24.044025   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:01:24.096112   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:01:24.096219   34591 retry.go:31] will retry after 198.953672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:24.297586   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:01:24.348586   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:01:24.348693   34591 retry.go:31] will retry after 380.525657ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:24.731565   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:01:24.785030   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:01:24.785136   34591 retry.go:31] will retry after 561.592329ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:25.349143   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:01:25.399463   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	W0429 08:01:25.399571   34591 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	
	W0429 08:01:25.399593   34591 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:25.399682   34591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 08:01:25.399736   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:01:25.447729   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:01:25.447837   34591 retry.go:31] will retry after 133.438146ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:25.582612   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:01:25.635766   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:01:25.635858   34591 retry.go:31] will retry after 261.823759ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:25.898485   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:01:25.952815   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:01:25.952913   34591 retry.go:31] will retry after 464.030282ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:26.418537   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:01:26.471320   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:01:26.471410   34591 retry.go:31] will retry after 796.219464ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:27.269983   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:01:27.322678   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	W0429 08:01:27.322773   34591 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	
	W0429 08:01:27.322792   34591 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:27.322805   34591 start.go:128] duration metric: took 6m3.628032326s to createHost
	I0429 08:01:27.322813   34591 start.go:83] releasing machines lock for "force-systemd-flag-789000", held for 6m3.628121744s
	W0429 08:01:27.322828   34591 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0429 08:01:27.323257   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:27.373399   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:27.373450   34591 delete.go:82] Unable to get host status for force-systemd-flag-789000, assuming it has already been deleted: state: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	W0429 08:01:27.373532   34591 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0429 08:01:27.373546   34591 start.go:728] Will try again in 5 seconds ...
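The repeated "retry.go:31] will retry after ..." lines in the attempt above follow a grow-the-delay retry loop: the same docker container inspect is re-run with jittered, roughly increasing waits until a cap is hit. A minimal sketch of that pattern (plain doubling here; the jitter and limits are inferred from this log, not minikube's actual retry implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs op up to attempts times, waiting longer after each
// failure, and returns the last error if every attempt fails.
func retryWithBackoff(attempts int, delay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off between attempts
	}
	return err
}

func main() {
	err := retryWithBackoff(5, 250*time.Millisecond, func() error {
		return errors.New("No such container: force-systemd-flag-789000")
	})
	fmt.Println("giving up:", err)
}
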
	I0429 08:01:32.374847   34591 start.go:360] acquireMachinesLock for force-systemd-flag-789000: {Name:mk0121ac282d77548310daa90ec043ed28059e54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 08:01:32.375041   34591 start.go:364] duration metric: took 153.67µs to acquireMachinesLock for "force-systemd-flag-789000"
	I0429 08:01:32.375081   34591 start.go:96] Skipping create...Using existing machine configuration
	I0429 08:01:32.375099   34591 fix.go:54] fixHost starting: 
	I0429 08:01:32.375508   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:32.427247   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:32.427300   34591 fix.go:112] recreateIfNeeded on force-systemd-flag-789000: state= err=unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:32.427325   34591 fix.go:117] machineExists: false. err=machine does not exist
	I0429 08:01:32.469778   34591 out.go:177] * docker "force-systemd-flag-789000" container is missing, will recreate.
	I0429 08:01:32.490426   34591 delete.go:124] DEMOLISHING force-systemd-flag-789000 ...
	I0429 08:01:32.490669   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:32.538880   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	W0429 08:01:32.538949   34591 stop.go:83] unable to get state: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:32.538969   34591 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:32.539352   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:32.587717   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:32.587779   34591 delete.go:82] Unable to get host status for force-systemd-flag-789000, assuming it has already been deleted: state: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:32.587861   34591 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-789000
	W0429 08:01:32.635640   34591 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-789000 returned with exit code 1
	I0429 08:01:32.635680   34591 kic.go:371] could not find the container force-systemd-flag-789000 to remove it. will try anyways
	I0429 08:01:32.635776   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:32.683637   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	W0429 08:01:32.683688   34591 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:32.683774   34591 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-789000 /bin/bash -c "sudo init 0"
	W0429 08:01:32.731820   34591 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-789000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 08:01:32.731852   34591 oci.go:650] error shutdown force-systemd-flag-789000: docker exec --privileged -t force-systemd-flag-789000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:33.733283   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:33.785844   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:33.785899   34591 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:33.785908   34591 oci.go:664] temporary error: container force-systemd-flag-789000 status is  but expect it to be exited
	I0429 08:01:33.785932   34591 retry.go:31] will retry after 603.308154ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:34.390174   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:34.444489   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:34.444549   34591 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:34.444557   34591 oci.go:664] temporary error: container force-systemd-flag-789000 status is  but expect it to be exited
	I0429 08:01:34.444582   34591 retry.go:31] will retry after 711.332146ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:35.158284   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:35.209982   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:35.210054   34591 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:35.210066   34591 oci.go:664] temporary error: container force-systemd-flag-789000 status is  but expect it to be exited
	I0429 08:01:35.210091   34591 retry.go:31] will retry after 1.189029115s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:36.401485   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:36.455772   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:36.455821   34591 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:36.455834   34591 oci.go:664] temporary error: container force-systemd-flag-789000 status is  but expect it to be exited
	I0429 08:01:36.455858   34591 retry.go:31] will retry after 857.969867ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:37.316215   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:37.368743   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:37.368792   34591 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:37.368801   34591 oci.go:664] temporary error: container force-systemd-flag-789000 status is  but expect it to be exited
	I0429 08:01:37.368827   34591 retry.go:31] will retry after 2.865959624s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:40.237126   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:40.288835   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:40.288882   34591 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:40.288892   34591 oci.go:664] temporary error: container force-systemd-flag-789000 status is  but expect it to be exited
	I0429 08:01:40.288920   34591 retry.go:31] will retry after 2.842233297s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:43.133564   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:43.185235   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:43.185290   34591 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:43.185300   34591 oci.go:664] temporary error: container force-systemd-flag-789000 status is  but expect it to be exited
	I0429 08:01:43.185329   34591 retry.go:31] will retry after 7.224282568s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:50.411432   34591 cli_runner.go:164] Run: docker container inspect force-systemd-flag-789000 --format={{.State.Status}}
	W0429 08:01:50.464783   34591 cli_runner.go:211] docker container inspect force-systemd-flag-789000 --format={{.State.Status}} returned with exit code 1
	I0429 08:01:50.464839   34591 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:01:50.464850   34591 oci.go:664] temporary error: container force-systemd-flag-789000 status is  but expect it to be exited
	I0429 08:01:50.464880   34591 oci.go:88] couldn't shut down force-systemd-flag-789000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	 
	I0429 08:01:50.464957   34591 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-789000
	I0429 08:01:50.514730   34591 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-789000
	W0429 08:01:50.563073   34591 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-789000 returned with exit code 1
	I0429 08:01:50.563188   34591 cli_runner.go:164] Run: docker network inspect force-systemd-flag-789000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 08:01:50.611824   34591 cli_runner.go:164] Run: docker network rm force-systemd-flag-789000
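The DEMOLISHING block above ends with an unconditional cleanup: once the container state can no longer be read, the container is force-removed along with its volumes and the per-profile network is deleted, with "not found" errors tolerated. A minimal Go sketch of that fallback, assuming only the docker CLI (an illustration, not minikube's actual delete path):

package main

import (
	"fmt"
	"os/exec"
)

// forceCleanup removes the profile's container and network, tolerating
// "not found" errors the way the log above does.
func forceCleanup(name string) {
	cmds := [][]string{
		{"rm", "-f", "-v", name}, // container plus its anonymous volumes
		{"network", "rm", name},  // per-profile bridge network
	}
	for _, args := range cmds {
		if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
			fmt.Printf("docker %v failed (may be fine): %v: %s\n", args, err, out)
		}
	}
}

func main() { forceCleanup("force-systemd-flag-789000") }
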
	I0429 08:01:50.717111   34591 fix.go:124] Sleeping 1 second for extra luck!
	I0429 08:01:51.717951   34591 start.go:125] createHost starting for "" (driver="docker")
	I0429 08:01:51.761558   34591 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 08:01:51.761741   34591 start.go:159] libmachine.API.Create for "force-systemd-flag-789000" (driver="docker")
	I0429 08:01:51.761767   34591 client.go:168] LocalClient.Create starting
	I0429 08:01:51.762023   34591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 08:01:51.762118   34591 main.go:141] libmachine: Decoding PEM data...
	I0429 08:01:51.762144   34591 main.go:141] libmachine: Parsing certificate...
	I0429 08:01:51.762219   34591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 08:01:51.762293   34591 main.go:141] libmachine: Decoding PEM data...
	I0429 08:01:51.762309   34591 main.go:141] libmachine: Parsing certificate...
	I0429 08:01:51.763157   34591 cli_runner.go:164] Run: docker network inspect force-systemd-flag-789000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 08:01:51.813514   34591 cli_runner.go:211] docker network inspect force-systemd-flag-789000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 08:01:51.813608   34591 network_create.go:281] running [docker network inspect force-systemd-flag-789000] to gather additional debugging logs...
	I0429 08:01:51.813624   34591 cli_runner.go:164] Run: docker network inspect force-systemd-flag-789000
	W0429 08:01:51.861610   34591 cli_runner.go:211] docker network inspect force-systemd-flag-789000 returned with exit code 1
	I0429 08:01:51.861640   34591 network_create.go:284] error running [docker network inspect force-systemd-flag-789000]: docker network inspect force-systemd-flag-789000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-789000 not found
	I0429 08:01:51.861655   34591 network_create.go:286] output of [docker network inspect force-systemd-flag-789000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-789000 not found
	
	** /stderr **
	I0429 08:01:51.861794   34591 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 08:01:51.912231   34591 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 08:01:51.913824   34591 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 08:01:51.915495   34591 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 08:01:51.917088   34591 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 08:01:51.918387   34591 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 08:01:51.918674   34591 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022cc520}
	I0429 08:01:51.918685   34591 network_create.go:124] attempt to create docker network force-systemd-flag-789000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0429 08:01:51.918761   34591 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-789000 force-systemd-flag-789000
	I0429 08:01:52.002774   34591 network_create.go:108] docker network force-systemd-flag-789000 192.168.94.0/24 created
	I0429 08:01:52.002811   34591 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-789000" container
	I0429 08:01:52.002911   34591 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 08:01:52.053129   34591 cli_runner.go:164] Run: docker volume create force-systemd-flag-789000 --label name.minikube.sigs.k8s.io=force-systemd-flag-789000 --label created_by.minikube.sigs.k8s.io=true
	I0429 08:01:52.101321   34591 oci.go:103] Successfully created a docker volume force-systemd-flag-789000
	I0429 08:01:52.101433   34591 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-789000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-789000 --entrypoint /usr/bin/test -v force-systemd-flag-789000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 08:01:52.337259   34591 oci.go:107] Successfully prepared a docker volume force-systemd-flag-789000
	I0429 08:01:52.337292   34591 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 08:01:52.337305   34591 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 08:01:52.337420   34591 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-789000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 08:07:51.766212   34591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 08:07:51.766350   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:51.818545   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:51.818664   34591 retry.go:31] will retry after 299.688764ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:52.119006   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:52.173514   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:52.173618   34591 retry.go:31] will retry after 256.824176ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:52.432835   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:52.485658   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:52.485779   34591 retry.go:31] will retry after 596.89205ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:53.084644   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:53.139439   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	W0429 08:07:53.139562   34591 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	
	W0429 08:07:53.139580   34591 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:53.139634   34591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 08:07:53.139707   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:53.189916   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:53.190022   34591 retry.go:31] will retry after 272.084095ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:53.463056   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:53.515056   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:53.515155   34591 retry.go:31] will retry after 550.185907ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:54.067756   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:54.121347   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:54.121443   34591 retry.go:31] will retry after 550.016618ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:54.673862   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:54.726335   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	W0429 08:07:54.726448   34591 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	
	W0429 08:07:54.726466   34591 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:54.726479   34591 start.go:128] duration metric: took 6m3.006372574s to createHost
	I0429 08:07:54.726542   34591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 08:07:54.726597   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:54.776629   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:54.776727   34591 retry.go:31] will retry after 138.051122ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:54.916003   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:54.967444   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:54.967540   34591 retry.go:31] will retry after 430.038457ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:55.399972   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:55.452725   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:55.452822   34591 retry.go:31] will retry after 484.071891ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:55.939318   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:55.990079   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	W0429 08:07:55.990180   34591 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	
	W0429 08:07:55.990196   34591 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:55.990261   34591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 08:07:55.990317   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:56.038814   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:56.038906   34591 retry.go:31] will retry after 195.651245ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:56.235929   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:56.287611   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:56.287711   34591 retry.go:31] will retry after 217.489106ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:56.507584   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:56.558540   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	I0429 08:07:56.558631   34591 retry.go:31] will retry after 553.901671ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:57.114918   34591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000
	W0429 08:07:57.169396   34591 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000 returned with exit code 1
	W0429 08:07:57.169507   34591 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	
	W0429 08:07:57.169531   34591 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	I0429 08:07:57.169544   34591 fix.go:56] duration metric: took 6m24.79225273s for fixHost
	I0429 08:07:57.169551   34591 start.go:83] releasing machines lock for "force-systemd-flag-789000", held for 6m24.792302401s
	W0429 08:07:57.169625   34591 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-789000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-789000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 08:07:57.212031   34591 out.go:177] 
	W0429 08:07:57.232930   34591 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 08:07:57.232988   34591 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 08:07:57.233032   34591 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 08:07:57.275063   34591 out.go:177] 

                                                
                                                
** /stderr **
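
The stderr above is one loop repeated: minikube keeps asking Docker which host port is published for the node's 22/tcp, every `docker container inspect -f` probe fails with "No such container" because the container was never created, and retry.go schedules another attempt after a short delay. A minimal Go sketch of that probe-and-retry pattern, assuming only the docker CLI and os/exec; the helper name and backoff constants are illustrative, not minikube's actual retry.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort asks Docker which host port is published for a container's
// 22/tcp. While the container does not exist, this fails exactly like the
// log above: "Error response from daemon: No such container: ...".
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		port, err := sshHostPort("force-systemd-flag-789000")
		if err == nil {
			fmt.Println("ssh host port:", port)
			return
		}
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // plain exponential backoff; retry.go's delays are jittered
	}
	fmt.Println("giving up: container never appeared")
}
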
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-789000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-789000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-789000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (201.415583ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-789000 host status: state: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000
	

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-789000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
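
For context, the command that just failed is the test's real assertion: a cluster started with --force-systemd should report `systemd` as the Docker cgroup driver inside the node. A hedged sketch of that check; the test function name is illustrative, while the binary path and profile name come from the log above:

package docker_test

import (
	"os/exec"
	"strings"
	"testing"
)

// TestCgroupDriverIsSystemd sketches the assertion behind docker_test.go:110:
// `docker info --format {{.CgroupDriver}}` inside the node should print
// "systemd" when the cluster was started with --force-systemd.
func TestCgroupDriverIsSystemd(t *testing.T) {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "force-systemd-flag-789000",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		// The exit status 80 path seen above: the node host does not exist.
		t.Fatalf("failed to get docker cgroup driver: %v\n%s", err, out)
	}
	if got := strings.TrimSpace(string(out)); got != "systemd" {
		t.Errorf("cgroup driver = %q, want %q", got, "systemd")
	}
}
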
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-29 08:07:57.557443 -0700 PDT m=+6908.488894158
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-789000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-789000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-789000",
	        "Id": "2ad65f16c8fa9b233999f55ee415a07aa637348f4f6f82d62dbf23fd67cafc71",
	        "Created": "2024-04-29T15:01:51.963487863Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-789000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
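
Note that the post-mortem `docker inspect` above returned a network object (Driver:bridge, IPAM, Scope:local), not a container: bare `docker inspect` matches any Docker object by name, and the only object still carrying this profile's name is the bridge network minikube created before host creation timed out. A small illustrative sketch that disambiguates the object type via the type-scoped inspect subcommands:

package main

import (
	"fmt"
	"os/exec"
)

// inspectKind probes the type-scoped inspect subcommands to report which
// Docker object types match a name. Bare `docker inspect` matches any of
// them, which is how a leftover network can answer for a missing container.
func inspectKind(name string) {
	for _, kind := range []string{"container", "network", "volume"} {
		if err := exec.Command("docker", kind, "inspect", name).Run(); err == nil {
			fmt.Printf("%s exists as a %s\n", name, kind)
		}
	}
}

func main() {
	inspectKind("force-systemd-flag-789000")
}
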
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-789000 -n force-systemd-flag-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-789000 -n force-systemd-flag-789000: exit status 7 (119.430049ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 08:07:57.727460   35328 status.go:249] status error: host: state: unknown state "force-systemd-flag-789000": docker container inspect force-systemd-flag-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-789000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-789000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-789000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-789000
--- FAIL: TestForceSystemdFlag (755.71s)
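
Both force-systemd failures in this report reduce to the same mechanism: host creation runs under the 360-second deadline visible in the cluster config (StartHostTimeout:6m0s), the node container never appears, and when the deadline fires minikube exits with DRV_CREATE_TIMEOUT. A scaled-down sketch of that deadline pattern using context; the function name is an assumption, not minikube's actual start.go code:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// createHost stands in for container creation plus boot; like the missing
// force-systemd-flag-789000 container, it never finishes on its own.
func createHost(ctx context.Context) error {
	<-ctx.Done()
	return ctx.Err()
}

func main() {
	// minikube's real deadline here is 360 s (StartHostTimeout:6m0s); a short
	// one keeps the sketch quick to run.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if err := createHost(ctx); errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("Exiting due to DRV_CREATE_TIMEOUT: create host timed out")
	}
}
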

                                                
                                    
TestForceSystemdEnv (750.36s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-036000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0429 07:44:06.890385   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:44:22.427296   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 07:47:10.027444   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:49:06.892431   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:49:22.429550   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 07:52:25.556592   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 07:54:06.968306   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:54:22.504986   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-036000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m29.274986865s)

                                                
                                                
-- stdout --
	* [force-systemd-env-036000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-036000" primary control-plane node in "force-systemd-env-036000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-036000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:43:24.801303   33745 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:43:24.801520   33745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:43:24.801526   33745 out.go:304] Setting ErrFile to fd 2...
	I0429 07:43:24.801529   33745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:43:24.801703   33745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:43:24.803230   33745 out.go:298] Setting JSON to false
	I0429 07:43:24.825642   33745 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":20578,"bootTime":1714381226,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 07:43:24.825746   33745 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 07:43:24.847794   33745 out.go:177] * [force-systemd-env-036000] minikube v1.33.0 on Darwin 14.4.1
	I0429 07:43:24.890423   33745 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 07:43:24.890459   33745 notify.go:220] Checking for updates...
	I0429 07:43:24.933123   33745 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	I0429 07:43:24.954340   33745 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 07:43:24.975252   33745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 07:43:24.996257   33745 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	I0429 07:43:25.017339   33745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0429 07:43:25.039183   33745 config.go:182] Loaded profile config "offline-docker-641000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:43:25.039328   33745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 07:43:25.094862   33745 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 07:43:25.095030   33745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:43:25.201016   33745 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:105 OomKillDisable:false NGoroutines:195 SystemTime:2024-04-29 14:43:25.190274598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:43:25.222779   33745 out.go:177] * Using the docker driver based on user configuration
	I0429 07:43:25.243469   33745 start.go:297] selected driver: docker
	I0429 07:43:25.243506   33745 start.go:901] validating driver "docker" against <nil>
	I0429 07:43:25.243521   33745 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 07:43:25.247813   33745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:43:25.354552   33745 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:105 OomKillDisable:false NGoroutines:195 SystemTime:2024-04-29 14:43:25.343783348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:43:25.354757   33745 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 07:43:25.354999   33745 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 07:43:25.376455   33745 out.go:177] * Using Docker Desktop driver with root privileges
	I0429 07:43:25.399287   33745 cni.go:84] Creating CNI manager for ""
	I0429 07:43:25.399329   33745 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 07:43:25.399342   33745 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 07:43:25.399429   33745 start.go:340] cluster config:
	{Name:force-systemd-env-036000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-036000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 07:43:25.420443   33745 out.go:177] * Starting "force-systemd-env-036000" primary control-plane node in "force-systemd-env-036000" cluster
	I0429 07:43:25.462233   33745 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 07:43:25.483408   33745 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 07:43:25.525275   33745 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:43:25.525353   33745 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 07:43:25.525373   33745 cache.go:56] Caching tarball of preloaded images
	I0429 07:43:25.525380   33745 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 07:43:25.525588   33745 preload.go:173] Found /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 07:43:25.525611   33745 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 07:43:25.525738   33745 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/force-systemd-env-036000/config.json ...
	I0429 07:43:25.525804   33745 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/force-systemd-env-036000/config.json: {Name:mk97104acc4742753cb4c2f21097526a6f860d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 07:43:25.577254   33745 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 07:43:25.577279   33745 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 07:43:25.577295   33745 cache.go:194] Successfully downloaded all kic artifacts
	I0429 07:43:25.577331   33745 start.go:360] acquireMachinesLock for force-systemd-env-036000: {Name:mk13d5d3e6a35e47e87bf1177eb13317fffbe2af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 07:43:25.577491   33745 start.go:364] duration metric: took 148.807µs to acquireMachinesLock for "force-systemd-env-036000"
	I0429 07:43:25.577520   33745 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-036000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-036000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 07:43:25.577787   33745 start.go:125] createHost starting for "" (driver="docker")
	I0429 07:43:25.599444   33745 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 07:43:25.599875   33745 start.go:159] libmachine.API.Create for "force-systemd-env-036000" (driver="docker")
	I0429 07:43:25.599928   33745 client.go:168] LocalClient.Create starting
	I0429 07:43:25.600126   33745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 07:43:25.600225   33745 main.go:141] libmachine: Decoding PEM data...
	I0429 07:43:25.600260   33745 main.go:141] libmachine: Parsing certificate...
	I0429 07:43:25.600363   33745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 07:43:25.600437   33745 main.go:141] libmachine: Decoding PEM data...
	I0429 07:43:25.600453   33745 main.go:141] libmachine: Parsing certificate...
	I0429 07:43:25.601349   33745 cli_runner.go:164] Run: docker network inspect force-systemd-env-036000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 07:43:25.650076   33745 cli_runner.go:211] docker network inspect force-systemd-env-036000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 07:43:25.650174   33745 network_create.go:281] running [docker network inspect force-systemd-env-036000] to gather additional debugging logs...
	I0429 07:43:25.650189   33745 cli_runner.go:164] Run: docker network inspect force-systemd-env-036000
	W0429 07:43:25.698045   33745 cli_runner.go:211] docker network inspect force-systemd-env-036000 returned with exit code 1
	I0429 07:43:25.698072   33745 network_create.go:284] error running [docker network inspect force-systemd-env-036000]: docker network inspect force-systemd-env-036000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-036000 not found
	I0429 07:43:25.698085   33745 network_create.go:286] output of [docker network inspect force-systemd-env-036000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-036000 not found
	
	** /stderr **
	I0429 07:43:25.698230   33745 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:43:25.748122   33745 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:43:25.749638   33745 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:43:25.751214   33745 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:43:25.751556   33745 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020e7ff0}
	I0429 07:43:25.751574   33745 network_create.go:124] attempt to create docker network force-systemd-env-036000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0429 07:43:25.751644   33745 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-036000 force-systemd-env-036000
	W0429 07:43:25.800346   33745 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-036000 force-systemd-env-036000 returned with exit code 1
	W0429 07:43:25.800380   33745 network_create.go:149] failed to create docker network force-systemd-env-036000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-036000 force-systemd-env-036000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0429 07:43:25.800398   33745 network_create.go:116] failed to create docker network force-systemd-env-036000 192.168.76.0/24, will retry: subnet is taken
	I0429 07:43:25.801981   33745 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:43:25.802335   33745 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023e4230}
	I0429 07:43:25.802347   33745 network_create.go:124] attempt to create docker network force-systemd-env-036000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0429 07:43:25.802415   33745 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-036000 force-systemd-env-036000
	I0429 07:43:25.893232   33745 network_create.go:108] docker network force-systemd-env-036000 192.168.85.0/24 created
	I0429 07:43:25.893269   33745 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-036000" container
	I0429 07:43:25.893385   33745 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 07:43:25.943503   33745 cli_runner.go:164] Run: docker volume create force-systemd-env-036000 --label name.minikube.sigs.k8s.io=force-systemd-env-036000 --label created_by.minikube.sigs.k8s.io=true
	I0429 07:43:25.992827   33745 oci.go:103] Successfully created a docker volume force-systemd-env-036000
	I0429 07:43:25.992944   33745 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-036000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-036000 --entrypoint /usr/bin/test -v force-systemd-env-036000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 07:43:26.303917   33745 oci.go:107] Successfully prepared a docker volume force-systemd-env-036000
	I0429 07:43:26.303970   33745 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:43:26.303983   33745 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 07:43:26.304096   33745 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-036000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 07:49:25.604620   33745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:49:25.604784   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:49:25.658444   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:49:25.658584   33745 retry.go:31] will retry after 240.525407ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:25.899647   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:49:25.951664   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:49:25.951779   33745 retry.go:31] will retry after 536.646439ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:26.490826   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:49:26.543444   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:49:26.543562   33745 retry.go:31] will retry after 523.366693ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:27.068184   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:49:27.119464   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	W0429 07:49:27.119573   33745 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	
	W0429 07:49:27.119595   33745 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:27.119657   33745 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:49:27.119716   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:49:27.167829   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:49:27.167917   33745 retry.go:31] will retry after 291.407371ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:27.461761   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:49:27.514525   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:49:27.514629   33745 retry.go:31] will retry after 342.120601ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:27.859127   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:49:27.911722   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:49:27.911822   33745 retry.go:31] will retry after 465.130723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:28.379358   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:49:28.428790   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	W0429 07:49:28.428891   33745 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	
	W0429 07:49:28.428911   33745 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:28.428939   33745 start.go:128] duration metric: took 6m2.848807698s to createHost
	I0429 07:49:28.428949   33745 start.go:83] releasing machines lock for "force-systemd-env-036000", held for 6m2.849118744s
	W0429 07:49:28.428965   33745 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0429 07:49:28.429401   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:28.477745   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:28.477808   33745 delete.go:82] Unable to get host status for force-systemd-env-036000, assuming it has already been deleted: state: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	W0429 07:49:28.477908   33745 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0429 07:49:28.477917   33745 start.go:728] Will try again in 5 seconds ...
	I0429 07:49:33.478895   33745 start.go:360] acquireMachinesLock for force-systemd-env-036000: {Name:mk13d5d3e6a35e47e87bf1177eb13317fffbe2af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 07:49:33.479098   33745 start.go:364] duration metric: took 157.684µs to acquireMachinesLock for "force-systemd-env-036000"
	I0429 07:49:33.479137   33745 start.go:96] Skipping create...Using existing machine configuration
	I0429 07:49:33.479156   33745 fix.go:54] fixHost starting: 
	I0429 07:49:33.479609   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:33.529335   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:33.529390   33745 fix.go:112] recreateIfNeeded on force-systemd-env-036000: state= err=unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:33.529413   33745 fix.go:117] machineExists: false. err=machine does not exist
	I0429 07:49:33.571675   33745 out.go:177] * docker "force-systemd-env-036000" container is missing, will recreate.
	I0429 07:49:33.592858   33745 delete.go:124] DEMOLISHING force-systemd-env-036000 ...
	I0429 07:49:33.593050   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:33.642306   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	W0429 07:49:33.642360   33745 stop.go:83] unable to get state: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:33.642380   33745 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:33.642748   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:33.690656   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:33.690719   33745 delete.go:82] Unable to get host status for force-systemd-env-036000, assuming it has already been deleted: state: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:33.690808   33745 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-036000
	W0429 07:49:33.739023   33745 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-036000 returned with exit code 1
	I0429 07:49:33.739060   33745 kic.go:371] could not find the container force-systemd-env-036000 to remove; will try anyway
	I0429 07:49:33.739142   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:33.787198   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	W0429 07:49:33.787246   33745 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:33.787332   33745 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-036000 /bin/bash -c "sudo init 0"
	W0429 07:49:33.835353   33745 cli_runner.go:211] docker exec --privileged -t force-systemd-env-036000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 07:49:33.835386   33745 oci.go:650] error shutdown force-systemd-env-036000: docker exec --privileged -t force-systemd-env-036000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:34.836010   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:34.888150   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:34.888208   33745 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:34.888223   33745 oci.go:664] temporary error: container force-systemd-env-036000 status is "" but expected it to be exited
	I0429 07:49:34.888247   33745 retry.go:31] will retry after 587.057894ms: couldn't verify container is exited: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:35.475813   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:35.526497   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:35.526546   33745 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:35.526562   33745 oci.go:664] temporary error: container force-systemd-env-036000 status is "" but expected it to be exited
	I0429 07:49:35.526588   33745 retry.go:31] will retry after 745.233998ms: couldn't verify container is exited: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:36.273542   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:36.323984   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:36.324030   33745 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:36.324039   33745 oci.go:664] temporary error: container force-systemd-env-036000 status is "" but expected it to be exited
	I0429 07:49:36.324063   33745 retry.go:31] will retry after 1.362771848s: couldn't verify container is exited: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:37.689232   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:37.742896   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:37.742949   33745 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:37.742962   33745 oci.go:664] temporary error: container force-systemd-env-036000 status is "" but expected it to be exited
	I0429 07:49:37.742988   33745 retry.go:31] will retry after 1.039468942s: couldn't verify container is exited: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:38.784092   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:38.835389   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:38.835449   33745 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:38.835463   33745 oci.go:664] temporary error: container force-systemd-env-036000 status is "" but expected it to be exited
	I0429 07:49:38.835496   33745 retry.go:31] will retry after 1.556733646s: couldn't verify container is exited: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:40.394559   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:40.448405   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:40.448455   33745 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:40.448467   33745 oci.go:664] temporary error: container force-systemd-env-036000 status is "" but expected it to be exited
	I0429 07:49:40.448494   33745 retry.go:31] will retry after 1.998623881s: couldn't verify container is exited: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:42.447502   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:42.500649   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:42.500698   33745 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:42.500712   33745 oci.go:664] temporary error: container force-systemd-env-036000 status is "" but expected it to be exited
	I0429 07:49:42.500739   33745 retry.go:31] will retry after 3.438388274s: couldn't verify container is exited: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:45.941513   33745 cli_runner.go:164] Run: docker container inspect force-systemd-env-036000 --format={{.State.Status}}
	W0429 07:49:45.995446   33745 cli_runner.go:211] docker container inspect force-systemd-env-036000 --format={{.State.Status}} returned with exit code 1
	I0429 07:49:45.995496   33745 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:49:45.995507   33745 oci.go:664] temporary error: container force-systemd-env-036000 status is "" but expected it to be exited
	I0429 07:49:45.995538   33745 oci.go:88] couldn't shut down force-systemd-env-036000 (might be okay): verify shutdown: couldn't verify container is exited: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	 
	I0429 07:49:45.995617   33745 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-036000
	I0429 07:49:46.044205   33745 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-036000
	W0429 07:49:46.092341   33745 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-036000 returned with exit code 1
	I0429 07:49:46.092451   33745 cli_runner.go:164] Run: docker network inspect force-systemd-env-036000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:49:46.140600   33745 cli_runner.go:164] Run: docker network rm force-systemd-env-036000
	I0429 07:49:46.239556   33745 fix.go:124] Sleeping 1 second for extra luck!
	I0429 07:49:47.241783   33745 start.go:125] createHost starting for "" (driver="docker")
	I0429 07:49:47.263751   33745 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0429 07:49:47.263912   33745 start.go:159] libmachine.API.Create for "force-systemd-env-036000" (driver="docker")
	I0429 07:49:47.263958   33745 client.go:168] LocalClient.Create starting
	I0429 07:49:47.264167   33745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 07:49:47.264281   33745 main.go:141] libmachine: Decoding PEM data...
	I0429 07:49:47.264307   33745 main.go:141] libmachine: Parsing certificate...
	I0429 07:49:47.264397   33745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 07:49:47.264472   33745 main.go:141] libmachine: Decoding PEM data...
	I0429 07:49:47.264497   33745 main.go:141] libmachine: Parsing certificate...
	I0429 07:49:47.265245   33745 cli_runner.go:164] Run: docker network inspect force-systemd-env-036000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 07:49:47.315550   33745 cli_runner.go:211] docker network inspect force-systemd-env-036000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 07:49:47.315645   33745 network_create.go:281] running [docker network inspect force-systemd-env-036000] to gather additional debugging logs...
	I0429 07:49:47.315663   33745 cli_runner.go:164] Run: docker network inspect force-systemd-env-036000
	W0429 07:49:47.363936   33745 cli_runner.go:211] docker network inspect force-systemd-env-036000 returned with exit code 1
	I0429 07:49:47.363971   33745 network_create.go:284] error running [docker network inspect force-systemd-env-036000]: docker network inspect force-systemd-env-036000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-036000 not found
	I0429 07:49:47.363986   33745 network_create.go:286] output of [docker network inspect force-systemd-env-036000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-036000 not found
	
	** /stderr **
	I0429 07:49:47.364113   33745 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:49:47.413878   33745 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:49:47.415187   33745 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:49:47.416719   33745 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:49:47.418053   33745 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:49:47.419613   33745 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:49:47.421287   33745 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:49:47.421673   33745 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020e6c40}
	I0429 07:49:47.421685   33745 network_create.go:124] attempt to create docker network force-systemd-env-036000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0429 07:49:47.421749   33745 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-036000 force-systemd-env-036000
	I0429 07:49:47.507239   33745 network_create.go:108] docker network force-systemd-env-036000 192.168.103.0/24 created
	I0429 07:49:47.507280   33745 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-036000" container
	I0429 07:49:47.507380   33745 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 07:49:47.558137   33745 cli_runner.go:164] Run: docker volume create force-systemd-env-036000 --label name.minikube.sigs.k8s.io=force-systemd-env-036000 --label created_by.minikube.sigs.k8s.io=true
	I0429 07:49:47.606344   33745 oci.go:103] Successfully created a docker volume force-systemd-env-036000
	I0429 07:49:47.606486   33745 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-036000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-036000 --entrypoint /usr/bin/test -v force-systemd-env-036000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 07:49:47.850306   33745 oci.go:107] Successfully prepared a docker volume force-systemd-env-036000
	I0429 07:49:47.850382   33745 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:49:47.850397   33745 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 07:49:47.850502   33745 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-036000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 07:55:47.342805   33745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:55:47.342941   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:47.394330   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:47.394447   33745 retry.go:31] will retry after 373.639774ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:47.770477   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:47.832345   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:47.832439   33745 retry.go:31] will retry after 364.849888ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:48.199662   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:48.251548   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:48.251653   33745 retry.go:31] will retry after 760.825142ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:49.014954   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:49.069015   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	W0429 07:55:49.069134   33745 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	
	W0429 07:55:49.069153   33745 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:49.069215   33745 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:55:49.069285   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:49.116467   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:49.116562   33745 retry.go:31] will retry after 183.801719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:49.302705   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:49.356353   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:49.356447   33745 retry.go:31] will retry after 468.708031ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:49.826176   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:49.879101   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:49.879200   33745 retry.go:31] will retry after 803.148487ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:50.684830   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:50.738365   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	W0429 07:55:50.738471   33745 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	
	W0429 07:55:50.738488   33745 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:50.738499   33745 start.go:128] duration metric: took 6m3.420161896s to createHost
	I0429 07:55:50.738571   33745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:55:50.738631   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:50.788057   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:50.788155   33745 retry.go:31] will retry after 266.368704ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:51.055425   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:51.104961   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:51.105058   33745 retry.go:31] will retry after 361.060411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:51.467319   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:51.521224   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:51.521314   33745 retry.go:31] will retry after 498.089823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:52.020979   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:52.074954   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:52.075061   33745 retry.go:31] will retry after 582.306892ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:52.659733   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:52.709904   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	W0429 07:55:52.710012   33745 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	
	W0429 07:55:52.710031   33745 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:52.710088   33745 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:55:52.710143   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:52.757996   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:52.758103   33745 retry.go:31] will retry after 338.567865ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:53.097831   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:53.147981   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:53.148073   33745 retry.go:31] will retry after 318.594303ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:53.468491   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:53.517655   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	I0429 07:55:53.517749   33745 retry.go:31] will retry after 352.845076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:53.871545   33745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000
	W0429 07:55:53.923526   33745 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000 returned with exit code 1
	W0429 07:55:53.923634   33745 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	
	W0429 07:55:53.923648   33745 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-036000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-036000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	I0429 07:55:53.923655   33745 fix.go:56] duration metric: took 6m20.367864298s for fixHost
	I0429 07:55:53.923662   33745 start.go:83] releasing machines lock for "force-systemd-env-036000", held for 6m20.367913286s
	W0429 07:55:53.923738   33745 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-036000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-036000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 07:55:53.969181   33745 out.go:177] 
	W0429 07:55:53.990225   33745 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 07:55:53.990281   33745 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 07:55:53.990303   33745 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 07:55:54.011962   33745 out.go:177] 

                                                
                                                
** /stderr **
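
The shape of the failure above is consistent: createHost started at 07:49:47, the last completed step was kicking off the preload extraction into the docker volume, and the next log entry comes six minutes later at 07:55:47, after the 360-second create timeout had already expired; the node container itself was never created. Every subsequent probe of the container's published SSH port therefore fails with "No such container", and retry.go backs off with growing delays. A minimal Go sketch of that probe-and-backoff loop (illustrative only, not minikube's actual retry.go; the container name is taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// hostSSHPort asks dockerd which host port is published for the guest's
	// port 22, using the same Go template the log shows minikube passing to
	// `docker container inspect`.
	func hostSSHPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		return string(out), nil
	}

	func main() {
		delay := 300 * time.Millisecond
		for attempt := 0; attempt < 5; attempt++ {
			port, err := hostSSHPort("force-systemd-env-036000")
			if err == nil {
				fmt.Print("ssh port: ", port)
				return
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay += delay / 2 // grow the delay, roughly like the retries above
		}
	}

Against a missing container this produces the same "exit status 1" retries seen above; against a healthy node it prints the mapped host port (e.g. 52516 in the mount-start-2-791000 inspect further down).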
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-036000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
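
One step that did succeed in the run above is network creation: the network.go entries walk candidate private /24 subnets, skipping each reserved one (192.168.49.0/24 through 192.168.94.0/24, the third octet stepping by 9) until 192.168.103.0/24 comes up free. A rough sketch of that walk (assumption: the reserved set is hardcoded here from the log, whereas minikube discovers it from the networks that actually exist):

	package main

	import "fmt"

	func main() {
		// Reserved /24s copied from the network.go lines above; in minikube the
		// set is discovered at runtime, not hardcoded (assumption for brevity).
		reserved := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
		for octet := 49; octet <= 254; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if reserved[octet] {
				fmt.Println("skipping reserved subnet", subnet)
				continue
			}
			fmt.Println("using free private subnet", subnet) // 192.168.103.0/24 here
			return
		}
	}

The chosen subnet then feeds directly into the `docker network create --driver=bridge --subnet=192.168.103.0/24 ...` call logged at 07:49:47.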
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-036000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-036000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (197.083063ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-036000 host status: state: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000
	

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-036000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
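
The assertion the test was working toward is simple: started with MINIKUBE_FORCE_SYSTEMD in the environment, the node's dockerd should report systemd as its cgroup driver. With no node container to ssh into, the query exits 80 as shown. The same query against a local daemon looks like the following sketch (it runs `docker info` on the host, not inside minikube):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the test passes to `docker info` over minikube ssh.
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		// The test expects "systemd" on the node; a stock Docker Desktop
		// daemon typically reports "cgroupfs".
		fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
	}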
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-29 07:55:54.304076 -0700 PDT m=+6185.239653563
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-036000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-036000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-env-036000",
	        "Id": "99ce7a08db3fbf1e0c5415ad79be0b14f24e7e4973471e10475b816af844eeaf",
	        "Created": "2024-04-29T14:49:47.467656587Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-036000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
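
Note what the post-mortem actually inspected: the container is gone, so `docker inspect force-systemd-env-036000` resolved the name to the leftover *network* created during the second start attempt (its Created timestamp, 14:49:47Z, matches the 07:49:47 PDT log entry). That is why the JSON above is a bridge network with subnet 192.168.103.0/24 rather than a container. A short sketch decoding just the fields shown:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type network struct {
		Name   string
		Driver string
		IPAM   struct {
			Config []struct {
				Subnet  string
				Gateway string
			}
		}
	}

	func main() {
		// Trimmed from the inspect output above.
		raw := `[{"Name":"force-systemd-env-036000","Driver":"bridge",
		          "IPAM":{"Config":[{"Subnet":"192.168.103.0/24","Gateway":"192.168.103.1"}]}}]`
		var nets []network
		if err := json.Unmarshal([]byte(raw), &nets); err != nil {
			panic(err)
		}
		fmt.Println(nets[0].Name, nets[0].Driver, nets[0].IPAM.Config[0].Subnet)
	}

The `minikube delete -p force-systemd-env-036000` run during cleanup below removes this leftover network along with the rest of the profile.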
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-036000 -n force-systemd-env-036000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-036000 -n force-systemd-env-036000: exit status 7 (112.282537ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 07:55:54.465864   34718 status.go:249] status error: host: state: unknown state "force-systemd-env-036000": docker container inspect force-systemd-env-036000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-036000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-036000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-036000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-036000
--- FAIL: TestForceSystemdEnv (750.36s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (885.96s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-791000 ssh -- ls /minikube-host
E0429 06:40:29.836937   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:44:06.762510   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:44:22.297405   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:45:45.344307   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:49:06.762509   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:49:22.298748   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:54:06.761189   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:54:22.297447   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-791000 ssh -- ls /minikube-host: signal: killed (14m45.535073573s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-791000 ssh -- ls /minikube-host" : signal: killed
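
Here the failure mode is different: the command did not error out, it hung until the harness killed it after 14m45s. The inspect output below shows what `ls /minikube-host` was touching, a bind mount of /host_mnt/Users into the container, i.e. Docker Desktop's macOS file sharing, which is a plausible place for the hang. When reproducing by hand it helps to bound the probe rather than let it block; a sketch with a context deadline (container name taken from the inspect below):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Give the probe 30s instead of the 14m45s the test waited.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		cmd := exec.CommandContext(ctx, "docker", "exec", "mount-start-2-791000", "ls", "/minikube-host")
		out, err := cmd.CombinedOutput()
		if ctx.Err() == context.DeadlineExceeded {
			fmt.Println("mount probe hung; the host file share, not minikube, is the likely culprit")
			return
		}
		fmt.Printf("err=%v\n%s", err, out)
	}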
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-791000
helpers_test.go:235: (dbg) docker inspect mount-start-2-791000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f3c74c2e03922600d5f263534cbea7d9750ba6e6a940b23738e3565ba5921a00",
	        "Created": "2024-04-29T13:39:42.718402928Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 120366,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-29T13:39:42.87717168Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c2e7b1115438f0e876ee0c793febc72a876a26c7b12b8e5475b223c894686c4",
	        "ResolvConfPath": "/var/lib/docker/containers/f3c74c2e03922600d5f263534cbea7d9750ba6e6a940b23738e3565ba5921a00/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3c74c2e03922600d5f263534cbea7d9750ba6e6a940b23738e3565ba5921a00/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3c74c2e03922600d5f263534cbea7d9750ba6e6a940b23738e3565ba5921a00/hosts",
	        "LogPath": "/var/lib/docker/containers/f3c74c2e03922600d5f263534cbea7d9750ba6e6a940b23738e3565ba5921a00/f3c74c2e03922600d5f263534cbea7d9750ba6e6a940b23738e3565ba5921a00-json.log",
	        "Name": "/mount-start-2-791000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-791000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-791000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9d32d9c652c6911e2fd323fc5cfa0133045db6c0494ef3cb4e12866c5f2ff367-init/diff:/var/lib/docker/overlay2/9d9541484b509d26c0cfc0c729403d4eaba281911856370b3a659a3fffdb84db/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9d32d9c652c6911e2fd323fc5cfa0133045db6c0494ef3cb4e12866c5f2ff367/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9d32d9c652c6911e2fd323fc5cfa0133045db6c0494ef3cb4e12866c5f2ff367/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9d32d9c652c6911e2fd323fc5cfa0133045db6c0494ef3cb4e12866c5f2ff367/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-791000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-791000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-791000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-791000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-791000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aae9a9a84c263432c8b4efbecb501b552093610f5737edf8ab1563978a203b30",
	            "SandboxKey": "/var/run/docker/netns/aae9a9a84c26",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52516"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52512"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52513"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52514"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52515"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-791000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "0a025094f1090234b2236b6210f50e52cef08aca576ef81e3ad24949b0e51800",
	                    "EndpointID": "0cba6967374c56e1c4ec484a5e58163b8896fc1b6316d374721ab81bc6387d16",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-2-791000",
	                        "f3c74c2e0392"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
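
Note: the port mappings in the inspect dump above can be queried directly instead of reading the full JSON. A minimal sketch in Go, assuming the container still exists; the template string is the same one minikube itself runs later in this report, and the expected output is the 22/tcp HostPort shown in the Ports block above (52516):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull the host port mapped to the node container's guest SSH port
	// (22/tcp) out of `docker container inspect`, using the Go template
	// that minikube issues when it needs the SSH port for a node.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"mount-start-2-791000").Output()
	if err != nil {
		// Once the container is gone this fails with "No such container",
		// which is exactly the loop visible later in this report.
		fmt.Println("inspect failed (container may already be gone):", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 52516
}
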
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-791000 -n mount-start-2-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-791000 -n mount-start-2-791000: exit status 6 (372.66852ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 06:54:33.987027   30839 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-791000" does not appear in /Users/jenkins/minikube-integration/18773-22625/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-791000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountSecond (885.96s)
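
Note: the status.go:417 error above is minikube failing to find the profile name as a context in the kubeconfig it was pointed at, hence the "stale minikube-vm" warning. A hedged sketch of that check using k8s.io/client-go; this is illustrative code, not minikube's own, and it assumes KUBECONFIG points at the file named in the error:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig that `minikube status` consulted and check
	// whether the profile appears as a context, mirroring status.go:417.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	const profile = "mount-start-2-791000" // profile name from the log above
	if _, ok := cfg.Contexts[profile]; !ok {
		fmt.Printf("%q does not appear in kubeconfig; try `minikube update-context`\n", profile)
	}
}
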

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (756.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-548000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0429 06:57:09.891989   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:59:06.762486   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:59:22.298874   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 07:02:25.346688   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 07:04:06.761571   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:04:22.296794   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-548000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m36.435631963s)

                                                
                                                
-- stdout --
	* [multinode-548000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-548000" primary control-plane node in "multinode-548000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-548000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 06:55:44.673662   30971 out.go:291] Setting OutFile to fd 1 ...
	I0429 06:55:44.673933   30971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:55:44.673939   30971 out.go:304] Setting ErrFile to fd 2...
	I0429 06:55:44.673943   30971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:55:44.674117   30971 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 06:55:44.675546   30971 out.go:298] Setting JSON to false
	I0429 06:55:44.697864   30971 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":17718,"bootTime":1714381226,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 06:55:44.697953   30971 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 06:55:44.720022   30971 out.go:177] * [multinode-548000] minikube v1.33.0 on Darwin 14.4.1
	I0429 06:55:44.761806   30971 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 06:55:44.782649   30971 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	I0429 06:55:44.761838   30971 notify.go:220] Checking for updates...
	I0429 06:55:44.803221   30971 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 06:55:44.824545   30971 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 06:55:44.845551   30971 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	I0429 06:55:44.866484   30971 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 06:55:44.888162   30971 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 06:55:44.943601   30971 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 06:55:44.943772   30971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 06:55:45.051137   30971 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-29 13:55:45.040826642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 06:55:45.093571   30971 out.go:177] * Using the docker driver based on user configuration
	I0429 06:55:45.114557   30971 start.go:297] selected driver: docker
	I0429 06:55:45.114589   30971 start.go:901] validating driver "docker" against <nil>
	I0429 06:55:45.114604   30971 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 06:55:45.118968   30971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 06:55:45.227988   30971 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-29 13:55:45.217885471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 06:55:45.228169   30971 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 06:55:45.228368   30971 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 06:55:45.249625   30971 out.go:177] * Using Docker Desktop driver with root privileges
	I0429 06:55:45.270458   30971 cni.go:84] Creating CNI manager for ""
	I0429 06:55:45.270492   30971 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 06:55:45.270506   30971 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 06:55:45.270613   30971 start.go:340] cluster config:
	{Name:multinode-548000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-548000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 06:55:45.291609   30971 out.go:177] * Starting "multinode-548000" primary control-plane node in "multinode-548000" cluster
	I0429 06:55:45.333463   30971 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 06:55:45.354466   30971 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 06:55:45.396358   30971 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 06:55:45.396400   30971 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 06:55:45.396432   30971 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 06:55:45.396456   30971 cache.go:56] Caching tarball of preloaded images
	I0429 06:55:45.396666   30971 preload.go:173] Found /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 06:55:45.396688   30971 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 06:55:45.398281   30971 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/multinode-548000/config.json ...
	I0429 06:55:45.398393   30971 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/multinode-548000/config.json: {Name:mk8993bf5977bd25e94bc474193355fd17c6d5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 06:55:45.448117   30971 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 06:55:45.448134   30971 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 06:55:45.448153   30971 cache.go:194] Successfully downloaded all kic artifacts
	I0429 06:55:45.448209   30971 start.go:360] acquireMachinesLock for multinode-548000: {Name:mkf8e57cc3eeb260fdebcc4e317197efd6f66b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 06:55:45.448379   30971 start.go:364] duration metric: took 158.327µs to acquireMachinesLock for "multinode-548000"
	I0429 06:55:45.448415   30971 start.go:93] Provisioning new machine with config: &{Name:multinode-548000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-548000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 06:55:45.448509   30971 start.go:125] createHost starting for "" (driver="docker")
	I0429 06:55:45.490348   30971 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0429 06:55:45.490805   30971 start.go:159] libmachine.API.Create for "multinode-548000" (driver="docker")
	I0429 06:55:45.490856   30971 client.go:168] LocalClient.Create starting
	I0429 06:55:45.491083   30971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 06:55:45.491189   30971 main.go:141] libmachine: Decoding PEM data...
	I0429 06:55:45.491230   30971 main.go:141] libmachine: Parsing certificate...
	I0429 06:55:45.491331   30971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 06:55:45.491387   30971 main.go:141] libmachine: Decoding PEM data...
	I0429 06:55:45.491398   30971 main.go:141] libmachine: Parsing certificate...
	I0429 06:55:45.492021   30971 cli_runner.go:164] Run: docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 06:55:45.541004   30971 cli_runner.go:211] docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 06:55:45.541111   30971 network_create.go:281] running [docker network inspect multinode-548000] to gather additional debugging logs...
	I0429 06:55:45.541131   30971 cli_runner.go:164] Run: docker network inspect multinode-548000
	W0429 06:55:45.589413   30971 cli_runner.go:211] docker network inspect multinode-548000 returned with exit code 1
	I0429 06:55:45.589448   30971 network_create.go:284] error running [docker network inspect multinode-548000]: docker network inspect multinode-548000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-548000 not found
	I0429 06:55:45.589461   30971 network_create.go:286] output of [docker network inspect multinode-548000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-548000 not found
	
	** /stderr **
	I0429 06:55:45.589585   30971 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 06:55:45.638544   30971 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 06:55:45.640140   30971 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 06:55:45.640462   30971 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002408e00}
	I0429 06:55:45.640480   30971 network_create.go:124] attempt to create docker network multinode-548000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0429 06:55:45.640542   30971 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-548000 multinode-548000
	I0429 06:55:45.723770   30971 network_create.go:108] docker network multinode-548000 192.168.67.0/24 created
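
Note: the network.go lines above show the subnet walk minikube performs before `docker network create`: candidate private /24s starting at 192.168.49.0/24, stepping by 9, with the first unreserved one taken. A stand-alone sketch of that walk; the `reserved` set here is hard-coded for illustration rather than discovered from `docker network inspect`:

package main

import "fmt"

// firstFreeSubnet walks 192.168.49.0/24 upward in steps of 9 (the pattern
// visible in the log above: 49, 58, 67, ...) and returns the first subnet
// not already in use.
func firstFreeSubnet(reserved map[string]bool) string {
	for third := 49; third <= 254; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !reserved[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// Reserved set as reported by the log: 49 and 58 are taken, 67 is free.
	reserved := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
	}
	fmt.Println(firstFreeSubnet(reserved)) // 192.168.67.0/24
}
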
	I0429 06:55:45.723805   30971 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-548000" container
	I0429 06:55:45.723917   30971 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 06:55:45.772470   30971 cli_runner.go:164] Run: docker volume create multinode-548000 --label name.minikube.sigs.k8s.io=multinode-548000 --label created_by.minikube.sigs.k8s.io=true
	I0429 06:55:45.821607   30971 oci.go:103] Successfully created a docker volume multinode-548000
	I0429 06:55:45.821716   30971 cli_runner.go:164] Run: docker run --rm --name multinode-548000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-548000 --entrypoint /usr/bin/test -v multinode-548000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 06:55:46.129196   30971 oci.go:107] Successfully prepared a docker volume multinode-548000
	I0429 06:55:46.129251   30971 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 06:55:46.129263   30971 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 06:55:46.129367   30971 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-548000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 07:01:45.491177   30971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:01:45.491291   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:01:45.541927   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:01:45.542034   30971 retry.go:31] will retry after 226.577225ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
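
Note: the retry.go:31 lines above and below are a generic retry-with-growing-delay loop wrapped around the failing SSH-port lookup. A minimal illustration of the pattern; the attempt count and delays here are invented for the sketch, not minikube's actual tuning:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs fn up to attempts times, roughly doubling the delay between
// tries, echoing the "will retry after ..." lines in the log.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	err := retry(4, 200*time.Millisecond, func() error {
		// Stand-in for `docker container inspect ...` failing while the
		// container does not exist yet.
		return errors.New("No such container: multinode-548000")
	})
	fmt.Println("gave up:", err)
}
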
	I0429 07:01:45.771088   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:01:45.824203   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:01:45.824321   30971 retry.go:31] will retry after 492.279453ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:46.318494   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:01:46.372187   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:01:46.372289   30971 retry.go:31] will retry after 839.364093ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:47.214055   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:01:47.267539   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:01:47.267649   30971 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:01:47.267671   30971 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:47.267726   30971 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:01:47.267777   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:01:47.316203   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:01:47.316295   30971 retry.go:31] will retry after 267.438097ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:47.584732   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:01:47.636292   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:01:47.636383   30971 retry.go:31] will retry after 323.602233ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:47.960431   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:01:48.011185   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:01:48.011290   30971 retry.go:31] will retry after 792.718937ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:48.804693   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:01:48.857857   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:01:48.857957   30971 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:01:48.857971   30971 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:48.857989   30971 start.go:128] duration metric: took 6m3.409457787s to createHost
	I0429 07:01:48.857996   30971 start.go:83] releasing machines lock for "multinode-548000", held for 6m3.409600685s
	W0429 07:01:48.858011   30971 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0429 07:01:48.858430   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:01:48.906346   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:01:48.906402   30971 delete.go:82] Unable to get host status for multinode-548000, assuming it has already been deleted: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	W0429 07:01:48.906472   30971 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0429 07:01:48.906483   30971 start.go:728] Will try again in 5 seconds ...
	I0429 07:01:53.908237   30971 start.go:360] acquireMachinesLock for multinode-548000: {Name:mkf8e57cc3eeb260fdebcc4e317197efd6f66b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 07:01:53.908891   30971 start.go:364] duration metric: took 603.111µs to acquireMachinesLock for "multinode-548000"
	I0429 07:01:53.909090   30971 start.go:96] Skipping create...Using existing machine configuration
	I0429 07:01:53.909112   30971 fix.go:54] fixHost starting: 
	I0429 07:01:53.909508   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:01:53.963188   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:01:53.963228   30971 fix.go:112] recreateIfNeeded on multinode-548000: state= err=unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:53.963247   30971 fix.go:117] machineExists: false. err=machine does not exist
	I0429 07:01:53.984864   30971 out.go:177] * docker "multinode-548000" container is missing, will recreate.
	I0429 07:01:54.026729   30971 delete.go:124] DEMOLISHING multinode-548000 ...
	I0429 07:01:54.026912   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:01:54.075267   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	W0429 07:01:54.075324   30971 stop.go:83] unable to get state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:54.075345   30971 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:54.075729   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:01:54.123770   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:01:54.123846   30971 delete.go:82] Unable to get host status for multinode-548000, assuming it has already been deleted: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:54.123928   30971 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-548000
	W0429 07:01:54.170799   30971 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-548000 returned with exit code 1
	I0429 07:01:54.170827   30971 kic.go:371] could not find the container multinode-548000 to remove it. will try anyways
	I0429 07:01:54.170896   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:01:54.218244   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	W0429 07:01:54.218294   30971 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:54.218371   30971 cli_runner.go:164] Run: docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0"
	W0429 07:01:54.266937   30971 cli_runner.go:211] docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 07:01:54.266979   30971 oci.go:650] error shutdown multinode-548000: docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:55.269382   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:01:55.319661   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:01:55.319716   30971 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:55.319734   30971 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:01:55.319767   30971 retry.go:31] will retry after 440.034916ms: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:55.761318   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:01:55.813961   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:01:55.814006   30971 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:55.814014   30971 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:01:55.814039   30971 retry.go:31] will retry after 968.279055ms: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:56.784723   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:01:56.836455   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:01:56.836500   30971 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:56.836510   30971 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:01:56.836535   30971 retry.go:31] will retry after 792.550239ms: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:57.630360   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:01:57.680792   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:01:57.680834   30971 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:01:57.680843   30971 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:01:57.680868   30971 retry.go:31] will retry after 2.355539936s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:02:00.038778   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:02:00.091296   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:02:00.091340   30971 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:02:00.091355   30971 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:02:00.091380   30971 retry.go:31] will retry after 2.349224822s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:02:02.442975   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:02:02.493024   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:02:02.493068   30971 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:02:02.493077   30971 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:02:02.493100   30971 retry.go:31] will retry after 3.739734206s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:02:06.235220   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:02:06.286272   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:02:06.286314   30971 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:02:06.286322   30971 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:02:06.286349   30971 retry.go:31] will retry after 7.410232748s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:02:13.698910   30971 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:02:13.751913   30971 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:02:13.751962   30971 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:02:13.751978   30971 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:02:13.752005   30971 oci.go:88] couldn't shut down multinode-548000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	 
	I0429 07:02:13.752083   30971 cli_runner.go:164] Run: docker rm -f -v multinode-548000
	I0429 07:02:13.800964   30971 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-548000
	W0429 07:02:13.848929   30971 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-548000 returned with exit code 1
	I0429 07:02:13.849044   30971 cli_runner.go:164] Run: docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:02:13.897359   30971 cli_runner.go:164] Run: docker network rm multinode-548000
	I0429 07:02:13.997252   30971 fix.go:124] Sleeping 1 second for extra luck!
	I0429 07:02:14.997376   30971 start.go:125] createHost starting for "" (driver="docker")
	I0429 07:02:15.018241   30971 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0429 07:02:15.018417   30971 start.go:159] libmachine.API.Create for "multinode-548000" (driver="docker")
	I0429 07:02:15.018445   30971 client.go:168] LocalClient.Create starting
	I0429 07:02:15.018626   30971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 07:02:15.018707   30971 main.go:141] libmachine: Decoding PEM data...
	I0429 07:02:15.018731   30971 main.go:141] libmachine: Parsing certificate...
	I0429 07:02:15.018801   30971 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 07:02:15.018865   30971 main.go:141] libmachine: Decoding PEM data...
	I0429 07:02:15.018878   30971 main.go:141] libmachine: Parsing certificate...
	I0429 07:02:15.019417   30971 cli_runner.go:164] Run: docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 07:02:15.073295   30971 cli_runner.go:211] docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 07:02:15.073379   30971 network_create.go:281] running [docker network inspect multinode-548000] to gather additional debugging logs...
	I0429 07:02:15.073404   30971 cli_runner.go:164] Run: docker network inspect multinode-548000
	W0429 07:02:15.121815   30971 cli_runner.go:211] docker network inspect multinode-548000 returned with exit code 1
	I0429 07:02:15.121841   30971 network_create.go:284] error running [docker network inspect multinode-548000]: docker network inspect multinode-548000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-548000 not found
	I0429 07:02:15.121864   30971 network_create.go:286] output of [docker network inspect multinode-548000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-548000 not found
	
	** /stderr **
	I0429 07:02:15.121999   30971 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:02:15.172229   30971 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:02:15.173834   30971 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:02:15.175377   30971 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:02:15.175710   30971 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e618c0}
	I0429 07:02:15.175724   30971 network_create.go:124] attempt to create docker network multinode-548000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0429 07:02:15.175794   30971 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-548000 multinode-548000
	W0429 07:02:15.224903   30971 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-548000 multinode-548000 returned with exit code 1
	W0429 07:02:15.224940   30971 network_create.go:149] failed to create docker network multinode-548000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-548000 multinode-548000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0429 07:02:15.224957   30971 network_create.go:116] failed to create docker network multinode-548000 192.168.76.0/24, will retry: subnet is taken
	I0429 07:02:15.226280   30971 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:02:15.226862   30971 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000537850}
	I0429 07:02:15.226877   30971 network_create.go:124] attempt to create docker network multinode-548000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0429 07:02:15.226967   30971 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-548000 multinode-548000
	I0429 07:02:15.311888   30971 network_create.go:108] docker network multinode-548000 192.168.85.0/24 created
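Two details of the subnet scan are visible in the two attempts above: candidate 192.168.x.0/24 networks step the third octet by 9 (49, 58, 67, 76, 85) and skip anything already reserved, and a create that still collides ("Pool overlaps with other one on this address space") simply advances to the next candidate. A small sketch of that scan, with the starting octet and step read off this log rather than from minikube's source:

    package main

    import "fmt"

    // firstFreeSubnet walks 192.168.x.0/24 candidates, starting at .49
    // and stepping the third octet by 9, returning the first subnet not
    // already reserved by an existing network.
    func firstFreeSubnet(taken map[string]bool) (string, bool) {
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[subnet] {
    			return subnet, true
    		}
    	}
    	return "", false
    }

    func main() {
    	// Subnets this log shows as unavailable.
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    		"192.168.76.0/24": true, // creation failed: pool overlaps
    	}
    	if s, ok := firstFreeSubnet(taken); ok {
    		fmt.Println("using free private subnet", s) // 192.168.85.0/24
    	}
    }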
	I0429 07:02:15.311924   30971 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-548000" container
	I0429 07:02:15.312023   30971 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 07:02:15.362203   30971 cli_runner.go:164] Run: docker volume create multinode-548000 --label name.minikube.sigs.k8s.io=multinode-548000 --label created_by.minikube.sigs.k8s.io=true
	I0429 07:02:15.410382   30971 oci.go:103] Successfully created a docker volume multinode-548000
	I0429 07:02:15.410509   30971 cli_runner.go:164] Run: docker run --rm --name multinode-548000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-548000 --entrypoint /usr/bin/test -v multinode-548000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 07:02:15.661766   30971 oci.go:107] Successfully prepared a docker volume multinode-548000
	I0429 07:02:15.661798   30971 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:02:15.661811   30971 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 07:02:15.661917   30971 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-548000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
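Note the gap after this command: extraction starts at 07:02:15 and the next log line is stamped 07:08:15, almost exactly the 360-second create-host timeout reported below, so the preload untar into the volume is where this run appears to have stalled. To rerun just the extraction step outside the test (a sketch; the tarball path, volume name, and kicbase digest are copied from the log, and docker must be on PATH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	const image = "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e"
    	const tarball = "/Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4"
    	// Untar the lz4-compressed preload into the named Docker volume,
    	// mirroring the command in the log above.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "multinode-548000:/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	start := time.Now()
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("took %v, err=%v\n%s", time.Since(start), err, out)
    }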
	I0429 07:08:15.077986   30971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:08:15.078120   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:15.130895   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:15.131022   30971 retry.go:31] will retry after 139.743684ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:15.273139   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:15.322348   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:15.322444   30971 retry.go:31] will retry after 428.718743ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:15.753537   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:15.807988   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:15.808097   30971 retry.go:31] will retry after 578.97668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:16.389485   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:16.440617   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:08:16.440725   30971 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:08:16.440746   30971 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:16.440801   30971 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:08:16.440854   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:16.488987   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:16.489095   30971 retry.go:31] will retry after 164.976407ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:16.656450   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:16.709265   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:16.709372   30971 retry.go:31] will retry after 436.124595ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:17.147919   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:17.200908   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:17.201006   30971 retry.go:31] will retry after 582.405983ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:17.785622   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:17.836095   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:08:17.836202   30971 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:08:17.836221   30971 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
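Each of the probes above passes the same Go template to docker container inspect -f, and every one fails only because the container no longer exists. The template itself can be exercised against a JSON stub with text/template to see what it extracts on a healthy container (the port mapping below is made up for illustration):

    package main

    import (
    	"encoding/json"
    	"os"
    	"text/template"
    )

    func main() {
    	// Hypothetical fragment of `docker container inspect` output.
    	const inspectJSON = `{
    		"NetworkSettings": {
    			"Ports": {"22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "50123"}]}
    		}
    	}`
    	var c map[string]interface{}
    	if err := json.Unmarshal([]byte(inspectJSON), &c); err != nil {
    		panic(err)
    	}
    	// The exact format string the runner passes to `docker ... -f`.
    	tmpl := template.Must(template.New("port").Parse(
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
    	if err := tmpl.Execute(os.Stdout, c); err != nil {
    		panic(err) // prints 50123 on success
    	}
    }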
	I0429 07:08:17.836232   30971 start.go:128] duration metric: took 6m2.78141178s to createHost
	I0429 07:08:17.836299   30971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:08:17.836362   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:17.884689   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:17.884786   30971 retry.go:31] will retry after 271.527528ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:18.158650   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:18.211581   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:18.211674   30971 retry.go:31] will retry after 516.595871ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:18.730635   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:18.782660   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:18.782749   30971 retry.go:31] will retry after 304.849968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:19.089215   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:19.143491   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:08:19.143593   30971 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:08:19.143616   30971 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:19.143668   30971 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:08:19.143728   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:19.191664   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:19.191755   30971 retry.go:31] will retry after 276.575107ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:19.470768   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:19.522381   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:19.522484   30971 retry.go:31] will retry after 547.692943ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:20.070906   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:20.125698   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:08:20.125794   30971 retry.go:31] will retry after 777.691886ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:20.905835   30971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:08:20.956965   30971 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:08:20.957071   30971 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:08:20.957092   30971 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:08:20.957101   30971 fix.go:56] duration metric: took 6m26.99057201s for fixHost
	I0429 07:08:20.957108   30971 start.go:83] releasing machines lock for "multinode-548000", held for 6m26.990641109s
	W0429 07:08:20.957180   30971 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-548000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-548000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 07:08:20.999339   30971 out.go:177] 
	W0429 07:08:21.020532   30971 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 07:08:21.020576   30971 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 07:08:21.020657   30971 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 07:08:21.041439   30971 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-548000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "9bfb80ac7e885d7050889343b72c85908c693085957e6798ecb9109c47f6cb69",
	        "Created": "2024-04-29T14:02:15.271490125Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (186.118889ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 07:08:21.320729   31378 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (756.69s)

TestMultiNode/serial/DeployApp2Nodes (118.67s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (111.361557ms)

** stderr ** 
	error: cluster "multinode-548000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- rollout status deployment/busybox: exit status 1 (109.420612ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.438166ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (116.633361ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.708401ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (113.846019ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (113.701145ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (115.701723ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (114.914853ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (112.548816ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0429 07:09:06.819982   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (113.063989ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0429 07:09:22.355219   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.38908ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (113.272699ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (110.241491ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- exec  -- nslookup kubernetes.io: exit status 1 (109.539328ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- exec  -- nslookup kubernetes.default: exit status 1 (109.173708ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (110.47866ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
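Every kubectl call in this test fails the same client-side check: the kubeconfig no longer has a usable server URL for "multinode-548000" because the earlier start never completed. A quick way to confirm that directly from the kubeconfig, sketched with k8s.io/client-go (the path is this job's; the lookup is an assumption about where to look, not part of the harness):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.LoadFromFile(
    		"/Users/jenkins/minikube-integration/18773-22625/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cluster, ok := cfg.Clusters["multinode-548000"]
    	if !ok {
    		fmt.Println(`no cluster entry for "multinode-548000"`)
    		return
    	}
    	// kubectl's "no server found for cluster" fires when this is empty.
    	fmt.Printf("server: %q\n", cluster.Server)
    }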
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "9bfb80ac7e885d7050889343b72c85908c693085957e6798ecb9109c47f6cb69",
	        "Created": "2024-04-29T14:02:15.271490125Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (114.592625ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 07:10:20.024990   31456 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (118.67s)

TestMultiNode/serial/PingHostFrom2Pods (0.28s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-548000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (109.875272ms)

** stderr ** 
	error: no server found for cluster "multinode-548000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "9bfb80ac7e885d7050889343b72c85908c693085957e6798ecb9109c47f6cb69",
	        "Created": "2024-04-29T14:02:15.271490125Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (115.571907ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 07:10:20.302735   31465 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.28s)

TestMultiNode/serial/AddNode (0.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-548000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-548000 -v 3 --alsologtostderr: exit status 80 (198.283504ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0429 07:10:20.367751   31469 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:20.367965   31469 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:20.367971   31469 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:20.367974   31469 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:20.368157   31469 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:20.368501   31469 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:20.368793   31469 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:20.369170   31469 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:10:20.416454   31469 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:10:20.438163   31469 out.go:177] 
	W0429 07:10:20.458540   31469 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-548000 host status: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-548000 host status: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	I0429 07:10:20.479416   31469 out.go:177] 

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-548000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "9bfb80ac7e885d7050889343b72c85908c693085957e6798ecb9109c47f6cb69",
	        "Created": "2024-04-29T14:02:15.271490125Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (115.387255ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 07:10:20.668753   31475 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)

TestMultiNode/serial/MultiNodeLabels (0.2s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-548000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-548000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (36.575087ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-548000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-548000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-548000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
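Two errors stack in this failure: kubectl exits without printing anything because the context is missing, and the harness then feeds that empty output to encoding/json, whose standard error for empty input is exactly the one logged above. A two-line reproduction:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	var labels []map[string]string
    	// Unmarshal of empty input reproduces the harness error verbatim.
    	err := json.Unmarshal([]byte(""), &labels)
    	fmt.Println(err) // unexpected end of JSON input
    }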
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "9bfb80ac7e885d7050889343b72c85908c693085957e6798ecb9109c47f6cb69",
	        "Created": "2024-04-29T14:02:15.271490125Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (114.66388ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 07:10:20.872473   31482 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.20s)
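One detail worth noting in the post-mortem dumps: bare "docker inspect multinode-548000" matched the leftover bridge network of that name (note the Scope/IPAM fields and the empty Containers map), while the container itself no longer exists, which is why "docker container inspect" keeps failing. A small sketch of type-scoped existence checks, assuming only the standard docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// exists reports whether a docker object of the given type matches name;
	// "docker <kind> inspect" exits non-zero when nothing of that type matches.
	func exists(kind, name string) bool {
		return exec.Command("docker", kind, "inspect", name).Run() == nil
	}

	func main() {
		fmt.Println("container:", exists("container", "multinode-548000")) // false in this run
		fmt.Println("network:  ", exists("network", "multinode-548000"))   // true in this run
	}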

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-548000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-791000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-548000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-548000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"multinode-548000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
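The assertion behind this message decodes the profile JSON and counts Nodes for the profile; in the payload above, Config.Nodes holds a single control-plane entry even though MultiNodeRequested is true. A minimal sketch of that count, assuming a trimmed-down struct with only the fields the check needs (profileList and nodeCount are illustrative names, not minikube's internal types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					Name string `json:"Name"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	// nodeCount returns how many nodes the named profile reports, or -1 on
	// a decode error.
	func nodeCount(raw []byte, profile string) int {
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			return -1
		}
		for _, p := range pl.Valid {
			if p.Name == profile {
				return len(p.Config.Nodes) // 1 in the output above; the test wants 3
			}
		}
		return 0
	}

	func main() {
		raw := []byte(`{"valid":[{"Name":"multinode-548000","Config":{"Nodes":[{"Name":""}]}}]}`)
		fmt.Println(nodeCount(raw, "multinode-548000")) // 1
	}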
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "9bfb80ac7e885d7050889343b72c85908c693085957e6798ecb9109c47f6cb69",
	        "Created": "2024-04-29T14:02:15.271490125Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (115.174444ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 07:10:21.227061   31494 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status --output json --alsologtostderr: exit status 7 (120.229651ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-548000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:10:21.291349   31498 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:21.297551   31498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:21.297561   31498 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:21.297566   31498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:21.297839   31498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:21.298082   31498 out.go:298] Setting JSON to true
	I0429 07:10:21.298112   31498 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:21.298161   31498 notify.go:220] Checking for updates...
	I0429 07:10:21.298539   31498 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:21.298550   31498 status.go:255] checking status of multinode-548000 ...
	I0429 07:10:21.298929   31498 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:10:21.347659   31498 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:10:21.347706   31498 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:10:21.347726   31498 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:10:21.347744   31498 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:10:21.347751   31498 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-548000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
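The decode failure here is a shape mismatch rather than malformed output: with only one (nonexistent) node, the status command printed a single JSON object (see the stdout block above), while the test unmarshals into []cmd.Status. A sketch of an object-or-array tolerant decode, with a trimmed-down Status struct standing in for cmd.Status:

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name string
		Host string
	}

	// decodeStatuses accepts either a bare object (single node) or an array
	// (multinode) and always returns a slice.
	func decodeStatuses(raw []byte) ([]Status, error) {
		raw = bytes.TrimSpace(raw)
		if len(raw) > 0 && raw[0] == '{' {
			var s Status
			if err := json.Unmarshal(raw, &s); err != nil {
				return nil, err
			}
			return []Status{s}, nil
		}
		var ss []Status
		return ss, json.Unmarshal(raw, &ss)
	}

	func main() {
		raw := []byte(`{"Name":"multinode-548000","Host":"Nonexistent"}`)
		ss, err := decodeStatuses(raw)
		fmt.Println(ss, err)
	}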
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "9bfb80ac7e885d7050889343b72c85908c693085957e6798ecb9109c47f6cb69",
	        "Created": "2024-04-29T14:02:15.271490125Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (115.14941ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 07:10:21.514879   31504 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.29s)

                                                
                                    
TestMultiNode/serial/StopNode (0.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 node stop m03: exit status 85 (158.510681ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-548000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status: exit status 7 (114.874684ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 07:10:21.788912   31510 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:10:21.788925   31510 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr: exit status 7 (114.687461ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:10:21.853579   31514 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:21.853773   31514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:21.853783   31514 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:21.853800   31514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:21.853989   31514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:21.854194   31514 out.go:298] Setting JSON to false
	I0429 07:10:21.854216   31514 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:21.854254   31514 notify.go:220] Checking for updates...
	I0429 07:10:21.854500   31514 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:21.854514   31514 status.go:255] checking status of multinode-548000 ...
	I0429 07:10:21.854901   31514 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:10:21.903556   31514 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:10:21.903617   31514 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:10:21.903649   31514 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:10:21.903667   31514 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:10:21.903674   31514 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr": multinode-548000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr": multinode-548000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr": multinode-548000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
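All three assertions above appear to count substrings in the plain-text status output (running kubelets, stopped hosts, stopped kubelets), and every count is zero here because each field reads Nonexistent. A minimal sketch of that style of check, assuming simple substring counting rather than the test's exact helpers:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		status := "multinode-548000\ntype: Control Plane\nhost: Nonexistent\nkubelet: Nonexistent\napiserver: Nonexistent\nkubeconfig: Nonexistent\n"
		fmt.Println("running kubelets:", strings.Count(status, "kubelet: Running")) // 0 in this run
		fmt.Println("stopped hosts:   ", strings.Count(status, "host: Stopped"))    // 0 in this run
		fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped")) // 0 in this run
	}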

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "9bfb80ac7e885d7050889343b72c85908c693085957e6798ecb9109c47f6cb69",
	        "Created": "2024-04-29T14:02:15.271490125Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (115.663656ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 07:10:22.071126   31520 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.56s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (55.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 node start m03 -v=7 --alsologtostderr: exit status 85 (155.759683ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:10:22.135325   31524 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:22.135963   31524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:22.135972   31524 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:22.135978   31524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:22.136559   31524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:22.136922   31524 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:22.137172   31524 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:22.157968   31524 out.go:177] 
	W0429 07:10:22.179088   31524 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0429 07:10:22.179113   31524 out.go:239] * 
	* 
	W0429 07:10:22.185187   31524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 07:10:22.205994   31524 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0429 07:10:22.135325   31524 out.go:291] Setting OutFile to fd 1 ...
I0429 07:10:22.135963   31524 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 07:10:22.135972   31524 out.go:304] Setting ErrFile to fd 2...
I0429 07:10:22.135978   31524 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 07:10:22.136559   31524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
I0429 07:10:22.136922   31524 mustload.go:65] Loading cluster: multinode-548000
I0429 07:10:22.137172   31524 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 07:10:22.157968   31524 out.go:177] 
W0429 07:10:22.179088   31524 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0429 07:10:22.179113   31524 out.go:239] * 
* 
W0429 07:10:22.185187   31524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0429 07:10:22.205994   31524 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-548000 node start m03 -v=7 --alsologtostderr": exit status 85
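As with node stop above, node start exits with status 85 paired with GUEST_NODE_RETRIEVE because no node m03 exists in the profile (the earlier profile dump shows a single entry under Nodes). A sketch of surfacing that exit code from a caller, assuming only the standard library:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-548000",
			"node", "start", "m03").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit code:", ee.ExitCode()) // 85 in the run above
		}
	}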
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr: exit status 7 (114.433375ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:10:22.291721   31526 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:22.292006   31526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:22.292012   31526 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:22.292015   31526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:22.292221   31526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:22.292400   31526 out.go:298] Setting JSON to false
	I0429 07:10:22.292421   31526 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:22.292458   31526 notify.go:220] Checking for updates...
	I0429 07:10:22.292715   31526 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:22.292729   31526 status.go:255] checking status of multinode-548000 ...
	I0429 07:10:22.293099   31526 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:10:22.341654   31526 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:10:22.341718   31526 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:10:22.341742   31526 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:10:22.341761   31526 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:10:22.341768   31526 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr: exit status 7 (120.280569ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:10:23.111157   31530 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:23.111445   31530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:23.111451   31530 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:23.111454   31530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:23.111623   31530 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:23.111805   31530 out.go:298] Setting JSON to false
	I0429 07:10:23.111830   31530 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:23.111868   31530 notify.go:220] Checking for updates...
	I0429 07:10:23.112905   31530 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:23.113133   31530 status.go:255] checking status of multinode-548000 ...
	I0429 07:10:23.113533   31530 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:10:23.161990   31530 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:10:23.162058   31530 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:10:23.162079   31530 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:10:23.162100   31530 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:10:23.162107   31530 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr: exit status 7 (118.985466ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:10:25.285212   31534 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:25.285996   31534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:25.286024   31534 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:25.286036   31534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:25.286566   31534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:25.286767   31534 out.go:298] Setting JSON to false
	I0429 07:10:25.286789   31534 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:25.286830   31534 notify.go:220] Checking for updates...
	I0429 07:10:25.287048   31534 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:25.287062   31534 status.go:255] checking status of multinode-548000 ...
	I0429 07:10:25.287426   31534 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:10:25.337075   31534 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:10:25.337124   31534 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:10:25.337147   31534 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:10:25.337162   31534 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:10:25.337173   31534 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr: exit status 7 (122.117848ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:10:27.477064   31538 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:27.477300   31538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:27.477306   31538 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:27.477309   31538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:27.477505   31538 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:27.477703   31538 out.go:298] Setting JSON to false
	I0429 07:10:27.477725   31538 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:27.477765   31538 notify.go:220] Checking for updates...
	I0429 07:10:27.478009   31538 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:27.478023   31538 status.go:255] checking status of multinode-548000 ...
	I0429 07:10:27.479334   31538 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:10:27.531590   31538 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:10:27.531652   31538 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:10:27.531670   31538 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:10:27.531688   31538 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:10:27.531699   31538 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr: exit status 7 (117.537438ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:10:30.452442   31546 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:30.452660   31546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:30.452666   31546 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:30.452669   31546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:30.452847   31546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:30.453032   31546 out.go:298] Setting JSON to false
	I0429 07:10:30.453061   31546 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:30.453104   31546 notify.go:220] Checking for updates...
	I0429 07:10:30.453339   31546 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:30.453354   31546 status.go:255] checking status of multinode-548000 ...
	I0429 07:10:30.453749   31546 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:10:30.502667   31546 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:10:30.502731   31546 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:10:30.502753   31546 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:10:30.502772   31546 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:10:30.502779   31546 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr: exit status 7 (119.473288ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:10:36.889492   31550 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:36.889686   31550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:36.889692   31550 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:36.889696   31550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:36.889868   31550 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:36.890033   31550 out.go:298] Setting JSON to false
	I0429 07:10:36.890056   31550 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:36.890092   31550 notify.go:220] Checking for updates...
	I0429 07:10:36.890317   31550 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:36.890332   31550 status.go:255] checking status of multinode-548000 ...
	I0429 07:10:36.890752   31550 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:10:36.940543   31550 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:10:36.940600   31550 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:10:36.940622   31550 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:10:36.940643   31550 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:10:36.940653   31550 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr: exit status 7 (117.969847ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:10:47.024636   31560 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:47.024942   31560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:47.024948   31560 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:47.024951   31560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:47.025126   31560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:47.025309   31560 out.go:298] Setting JSON to false
	I0429 07:10:47.025331   31560 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:47.025371   31560 notify.go:220] Checking for updates...
	I0429 07:10:47.025649   31560 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:47.025660   31560 status.go:255] checking status of multinode-548000 ...
	I0429 07:10:47.026032   31560 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:10:47.074258   31560 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:10:47.074319   31560 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:10:47.074342   31560 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:10:47.074359   31560 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:10:47.074367   31560 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr: exit status 7 (121.315347ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:10:57.003274   31565 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:10:57.003463   31565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:57.003469   31565 out.go:304] Setting ErrFile to fd 2...
	I0429 07:10:57.003472   31565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:10:57.003669   31565 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:10:57.003838   31565 out.go:298] Setting JSON to false
	I0429 07:10:57.003861   31565 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:10:57.003899   31565 notify.go:220] Checking for updates...
	I0429 07:10:57.004136   31565 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:10:57.004148   31565 status.go:255] checking status of multinode-548000 ...
	I0429 07:10:57.004531   31565 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:10:57.054197   31565 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:10:57.054267   31565 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:10:57.054286   31565 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:10:57.054301   31565 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:10:57.054309   31565 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr: exit status 7 (116.650067ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:11:17.233311   31571 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:11:17.233510   31571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:11:17.233516   31571 out.go:304] Setting ErrFile to fd 2...
	I0429 07:11:17.233519   31571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:11:17.233698   31571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:11:17.233879   31571 out.go:298] Setting JSON to false
	I0429 07:11:17.233901   31571 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:11:17.233948   31571 notify.go:220] Checking for updates...
	I0429 07:11:17.234220   31571 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:11:17.234234   31571 status.go:255] checking status of multinode-548000 ...
	I0429 07:11:17.234618   31571 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:17.282262   31571 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:11:17.282328   31571 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:11:17.282348   31571 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:11:17.282372   31571 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:11:17.282379   31571 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-548000 status -v=7 --alsologtostderr" : exit status 7
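The timestamps of the status retries above (07:10:22 through 07:11:17) show the test polling with roughly doubling gaps for about 55 seconds before giving up. A minimal sketch of that wait-with-backoff pattern, assuming a plain exec-and-sleep loop rather than the test's actual retry helper:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitRunning polls `minikube status` until it exits 0 or the deadline
	// passes, doubling the sleep between attempts.
	func waitRunning(profile string, deadline time.Duration) error {
		delay := time.Second
		for start := time.Now(); time.Since(start) < deadline; {
			if exec.Command("out/minikube-darwin-amd64", "-p", profile, "status").Run() == nil {
				return nil // exit 0: host is Running
			}
			time.Sleep(delay)
			delay *= 2 // back off, as the log timestamps suggest
		}
		return fmt.Errorf("%s: not running after %s", profile, deadline)
	}

	func main() {
		fmt.Println(waitRunning("multinode-548000", 55*time.Second))
	}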
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "9bfb80ac7e885d7050889343b72c85908c693085957e6798ecb9109c47f6cb69",
	        "Created": "2024-04-29T14:02:15.271490125Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

-- /stdout --
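
Note that this post-mortem inspect did not match a container at all: the object dumped above is the leftover "multinode-548000" bridge network (IPAM subnet 192.168.85.0/24, empty Containers map), which survives even though the container is gone. A bare "docker inspect NAME" returns whichever object type matches the name; pinning the type makes the distinction explicit, as in this small sketch (hypothetical kind helper, assuming the docker CLI):

package main

import (
	"fmt"
	"os/exec"
)

// kind reports which Docker object type a name resolves to by retrying
// the inspect with an explicit --type until one succeeds.
func kind(name string) string {
	for _, t := range []string{"container", "network", "volume", "image"} {
		if exec.Command("docker", "inspect", "--type", t, name).Run() == nil {
			return t
		}
	}
	return "none"
}

func main() {
	fmt.Println(kind("multinode-548000")) // "network" for the state captured above
}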
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (114.722034ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 07:11:17.449445   31577 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (55.38s)

TestMultiNode/serial/RestartKeepsNodes (787.76s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-548000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-548000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-548000: exit status 82 (14.977277033s)

-- stdout --
	* Stopping node "multinode-548000"  ...
	* Stopping node "multinode-548000"  ...
	* Stopping node "multinode-548000"  ...
	* Stopping node "multinode-548000"  ...
	* Stopping node "multinode-548000"  ...
	* Stopping node "multinode-548000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-548000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-548000" : exit status 82
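
The stop failure follows the same pattern: each "Stopping node" round re-checks the container state, the inspect keeps failing with "No such container", and after a fixed number of rounds minikube gives up with GUEST_STOP_TIMEOUT (exit status 82). A rough sketch of such a bounded loop, simplified from what the log implies (hypothetical stopWithRetries helper, not minikube's actual stop path):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

var errStopTimeout = errors.New("GUEST_STOP_TIMEOUT: Unable to stop VM")

// stopWithRetries re-checks state before each stop attempt; with the
// container already deleted, every check fails and the loop times out,
// which is exactly the six "Stopping node" lines in the stdout above.
func stopWithRetries(name string, attempts int) error {
	for i := 0; i < attempts; i++ {
		fmt.Printf("* Stopping node %q  ...\n", name)
		if err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Run(); err != nil {
			time.Sleep(time.Second)
			continue
		}
		return exec.Command("docker", "stop", name).Run()
	}
	return errStopTimeout
}

func main() {
	if err := stopWithRetries("multinode-548000", 6); err != nil {
		fmt.Println("X Exiting due to", err) // minikube maps this to exit status 82
	}
}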
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-548000 --wait=true -v=8 --alsologtostderr
E0429 07:13:49.951965   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:14:06.820153   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:14:22.355627   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 07:19:05.406807   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 07:19:06.819604   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:19:22.356126   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 07:24:06.882461   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:24:22.419273   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-548000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m52.467118689s)

-- stdout --
	* [multinode-548000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-548000" primary control-plane node in "multinode-548000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* docker "multinode-548000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-548000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0429 07:11:32.556344   31601 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:11:32.557013   31601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:11:32.557019   31601 out.go:304] Setting ErrFile to fd 2...
	I0429 07:11:32.557023   31601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:11:32.557411   31601 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:11:32.559221   31601 out.go:298] Setting JSON to false
	I0429 07:11:32.581385   31601 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":18666,"bootTime":1714381226,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 07:11:32.581482   31601 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 07:11:32.603544   31601 out.go:177] * [multinode-548000] minikube v1.33.0 on Darwin 14.4.1
	I0429 07:11:32.645353   31601 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 07:11:32.645419   31601 notify.go:220] Checking for updates...
	I0429 07:11:32.667435   31601 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	I0429 07:11:32.688220   31601 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 07:11:32.709480   31601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 07:11:32.730270   31601 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	I0429 07:11:32.751243   31601 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 07:11:32.772839   31601 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:11:32.772972   31601 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 07:11:32.827702   31601 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 07:11:32.827866   31601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:11:32.936254   31601 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:false NGoroutines:125 SystemTime:2024-04-29 14:11:32.924502178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:11:32.978499   31601 out.go:177] * Using the docker driver based on existing profile
	I0429 07:11:32.999584   31601 start.go:297] selected driver: docker
	I0429 07:11:32.999620   31601 start.go:901] validating driver "docker" against &{Name:multinode-548000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-548000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 07:11:32.999735   31601 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 07:11:32.999952   31601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:11:33.107446   31601 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:false NGoroutines:125 SystemTime:2024-04-29 14:11:33.096558898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:11:33.110486   31601 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 07:11:33.110552   31601 cni.go:84] Creating CNI manager for ""
	I0429 07:11:33.110564   31601 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 07:11:33.110658   31601 start.go:340] cluster config:
	{Name:multinode-548000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-548000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 07:11:33.153052   31601 out.go:177] * Starting "multinode-548000" primary control-plane node in "multinode-548000" cluster
	I0429 07:11:33.174028   31601 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 07:11:33.195020   31601 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 07:11:33.237137   31601 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:11:33.237184   31601 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 07:11:33.237225   31601 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 07:11:33.237243   31601 cache.go:56] Caching tarball of preloaded images
	I0429 07:11:33.237456   31601 preload.go:173] Found /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 07:11:33.237478   31601 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 07:11:33.238388   31601 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/multinode-548000/config.json ...
	I0429 07:11:33.290727   31601 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 07:11:33.290745   31601 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 07:11:33.290766   31601 cache.go:194] Successfully downloaded all kic artifacts
	I0429 07:11:33.290804   31601 start.go:360] acquireMachinesLock for multinode-548000: {Name:mkf8e57cc3eeb260fdebcc4e317197efd6f66b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 07:11:33.290911   31601 start.go:364] duration metric: took 83.308µs to acquireMachinesLock for "multinode-548000"
	I0429 07:11:33.290937   31601 start.go:96] Skipping create...Using existing machine configuration
	I0429 07:11:33.290949   31601 fix.go:54] fixHost starting: 
	I0429 07:11:33.291204   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:33.340324   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:11:33.340376   31601 fix.go:112] recreateIfNeeded on multinode-548000: state= err=unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:33.340397   31601 fix.go:117] machineExists: false. err=machine does not exist
	I0429 07:11:33.361974   31601 out.go:177] * docker "multinode-548000" container is missing, will recreate.
	I0429 07:11:33.403825   31601 delete.go:124] DEMOLISHING multinode-548000 ...
	I0429 07:11:33.404028   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:33.454520   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	W0429 07:11:33.454586   31601 stop.go:83] unable to get state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:33.454602   31601 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:33.454979   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:33.502520   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:11:33.502575   31601 delete.go:82] Unable to get host status for multinode-548000, assuming it has already been deleted: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:33.502670   31601 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-548000
	W0429 07:11:33.550271   31601 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-548000 returned with exit code 1
	I0429 07:11:33.550302   31601 kic.go:371] could not find the container multinode-548000 to remove it. will try anyways
	I0429 07:11:33.550368   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:33.598284   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	W0429 07:11:33.598327   31601 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:33.598406   31601 cli_runner.go:164] Run: docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0"
	W0429 07:11:33.646660   31601 cli_runner.go:211] docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 07:11:33.646689   31601 oci.go:650] error shutdown multinode-548000: docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:34.649063   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:34.701855   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:11:34.701901   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:34.701916   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:11:34.701953   31601 retry.go:31] will retry after 451.934399ms: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:35.154974   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:35.205959   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:11:35.206001   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:35.206014   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:11:35.206037   31601 retry.go:31] will retry after 1.000727392s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:36.207465   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:36.258957   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:11:36.258999   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:36.259011   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:11:36.259036   31601 retry.go:31] will retry after 1.581502196s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:37.841417   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:37.893789   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:11:37.893831   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:37.893839   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:11:37.893863   31601 retry.go:31] will retry after 1.213356508s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:39.108109   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:39.160278   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:11:39.160322   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:39.160332   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:11:39.160356   31601 retry.go:31] will retry after 1.475331364s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:40.638026   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:40.690777   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:11:40.690817   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:40.690824   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:11:40.690848   31601 retry.go:31] will retry after 3.825678296s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:44.518185   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:44.569116   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:11:44.569158   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:44.569168   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:11:44.569191   31601 retry.go:31] will retry after 3.616313441s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:48.187314   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:11:48.237630   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:11:48.237671   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:11:48.237684   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:11:48.237714   31601 oci.go:88] couldn't shut down multinode-548000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	 
	I0429 07:11:48.237794   31601 cli_runner.go:164] Run: docker rm -f -v multinode-548000
	I0429 07:11:48.287531   31601 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-548000
	W0429 07:11:48.336452   31601 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-548000 returned with exit code 1
	I0429 07:11:48.336574   31601 cli_runner.go:164] Run: docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:11:48.386370   31601 cli_runner.go:164] Run: docker network rm multinode-548000
	I0429 07:11:48.492888   31601 fix.go:124] Sleeping 1 second for extra luck!
	I0429 07:11:49.493121   31601 start.go:125] createHost starting for "" (driver="docker")
	I0429 07:11:49.517694   31601 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0429 07:11:49.517870   31601 start.go:159] libmachine.API.Create for "multinode-548000" (driver="docker")
	I0429 07:11:49.517916   31601 client.go:168] LocalClient.Create starting
	I0429 07:11:49.518131   31601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 07:11:49.518230   31601 main.go:141] libmachine: Decoding PEM data...
	I0429 07:11:49.518268   31601 main.go:141] libmachine: Parsing certificate...
	I0429 07:11:49.518376   31601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 07:11:49.518473   31601 main.go:141] libmachine: Decoding PEM data...
	I0429 07:11:49.518488   31601 main.go:141] libmachine: Parsing certificate...
	I0429 07:11:49.538486   31601 cli_runner.go:164] Run: docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 07:11:49.590073   31601 cli_runner.go:211] docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 07:11:49.590156   31601 network_create.go:281] running [docker network inspect multinode-548000] to gather additional debugging logs...
	I0429 07:11:49.590171   31601 cli_runner.go:164] Run: docker network inspect multinode-548000
	W0429 07:11:49.639602   31601 cli_runner.go:211] docker network inspect multinode-548000 returned with exit code 1
	I0429 07:11:49.639628   31601 network_create.go:284] error running [docker network inspect multinode-548000]: docker network inspect multinode-548000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-548000 not found
	I0429 07:11:49.639638   31601 network_create.go:286] output of [docker network inspect multinode-548000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-548000 not found
	
	** /stderr **
	I0429 07:11:49.639758   31601 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:11:49.689683   31601 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:11:49.691297   31601 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:11:49.691641   31601 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023cab70}
	I0429 07:11:49.691659   31601 network_create.go:124] attempt to create docker network multinode-548000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0429 07:11:49.691723   31601 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-548000 multinode-548000
	I0429 07:11:49.776817   31601 network_create.go:108] docker network multinode-548000 192.168.67.0/24 created
	I0429 07:11:49.776854   31601 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-548000" container
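
The network.go lines above show minikube's free-subnet scan: candidate /24 blocks starting at 192.168.49.0, with the third octet stepping by 9 (49, 58, 67, ...), are skipped while reserved, and the first free one is handed to "docker network create". A simplified sketch of that scan (hypothetical createFreeNetwork helper that lets Docker reject overlapping subnets instead of tracking reservations itself):

package main

import (
	"fmt"
	"os/exec"
)

// createFreeNetwork walks the same candidate subnets seen in the log and
// returns the first one Docker accepts for a new bridge network.
func createFreeNetwork(name string) (string, error) {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=65535", name).Run()
		if err == nil {
			return subnet, nil // 192.168.67.0/24 in the run above
		}
		// Overlapping or otherwise rejected subnet: try the next candidate.
	}
	return "", fmt.Errorf("no free private subnet for %s", name)
}

func main() {
	subnet, err := createFreeNetwork("example-net") // hypothetical network name
	fmt.Println(subnet, err)
}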
	I0429 07:11:49.776954   31601 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 07:11:49.827305   31601 cli_runner.go:164] Run: docker volume create multinode-548000 --label name.minikube.sigs.k8s.io=multinode-548000 --label created_by.minikube.sigs.k8s.io=true
	I0429 07:11:49.875649   31601 oci.go:103] Successfully created a docker volume multinode-548000
	I0429 07:11:49.875760   31601 cli_runner.go:164] Run: docker run --rm --name multinode-548000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-548000 --entrypoint /usr/bin/test -v multinode-548000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 07:11:50.125348   31601 oci.go:107] Successfully prepared a docker volume multinode-548000
	I0429 07:11:50.125405   31601 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:11:50.125418   31601 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 07:11:50.125518   31601 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-548000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
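
This docker run is the preload step: the lz4 tarball of cached images is bind-mounted read-only and untarred into the machine's named volume using the kicbase image's own /usr/bin/tar. Note the six-minute gap before the next log line; the extraction appears to have consumed the remainder of the createHost budget. A sketch of the same invocation with an explicit bound (hypothetical extractPreload helper; the 5-minute timeout and the local tarball path are illustrative assumptions, not values from this run):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// extractPreload mirrors the docker run above: mount the preload tarball
// read-only, mount the named volume, and untar with lz4 inside the image,
// bounded by a context so a hung extraction cannot stall indefinitely.
func extractPreload(ctx context.Context, tarball, volume, image string) error {
	cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	return cmd.Run()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute) // assumed bound
	defer cancel()
	fmt.Println(extractPreload(ctx, "preloaded-images.tar.lz4", "multinode-548000",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706"))
}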
	I0429 07:17:49.518548   31601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:17:49.518682   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:49.572049   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:49.572172   31601 retry.go:31] will retry after 320.213123ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:49.893722   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:49.945626   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:49.945735   31601 retry.go:31] will retry after 347.348706ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:50.295545   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:50.347636   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:50.347745   31601 retry.go:31] will retry after 596.954551ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:50.947056   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:50.997781   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:17:50.997886   31601 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:17:50.997914   31601 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:50.997969   31601 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:17:50.998025   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:51.047824   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:51.047924   31601 retry.go:31] will retry after 367.858701ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:51.418042   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:51.469463   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:51.469563   31601 retry.go:31] will retry after 554.554527ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:52.024338   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:52.075457   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:52.075566   31601 retry.go:31] will retry after 537.387084ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:52.614183   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:52.664115   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:17:52.664217   31601 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:17:52.664233   31601 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:52.664243   31601 start.go:128] duration metric: took 6m3.170820379s to createHost
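
From this point the harness keeps probing for the container's published SSH port with docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}', retrying after short randomized delays (the retry.go:31 lines), but the container was never created, so every probe fails identically. A compact sketch of that probe-with-backoff loop (hypothetical sshPort helper; the delay bounds are illustrative):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// sshPort asks Docker for the host port published for 22/tcp, retrying a
// bounded number of times with a short jittered delay between attempts.
func sshPort(name string, attempts int) (string, error) {
	const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		delay := 200*time.Millisecond + time.Duration(rand.Intn(400))*time.Millisecond
		fmt.Printf("will retry after %v: get port 22 for %q\n", delay, name)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("get port 22 for %q: container never appeared", name)
}

func main() {
	fmt.Println(sshPort("multinode-548000", 4))
}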
	I0429 07:17:52.664314   31601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:17:52.664373   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:52.714182   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:52.714274   31601 retry.go:31] will retry after 272.233764ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:52.988889   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:53.040035   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:53.040128   31601 retry.go:31] will retry after 208.279096ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:53.249471   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:53.303985   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:53.304074   31601 retry.go:31] will retry after 772.240523ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:54.078670   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:54.130753   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:17:54.130855   31601 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:17:54.130869   31601 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:54.130930   31601 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:17:54.130991   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:54.178451   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:54.178540   31601 retry.go:31] will retry after 298.093959ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:54.479014   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:54.532108   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:54.532202   31601 retry.go:31] will retry after 402.160065ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:54.936719   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:54.988379   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:17:54.988468   31601 retry.go:31] will retry after 371.358898ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:55.361394   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:17:55.412794   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:17:55.412901   31601 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:17:55.412918   31601 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:17:55.412928   31601 fix.go:56] duration metric: took 6m22.121767083s for fixHost
	I0429 07:17:55.412934   31601 start.go:83] releasing machines lock for "multinode-548000", held for 6m22.121799896s
	W0429 07:17:55.412953   31601 start.go:713] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 07:17:55.413017   31601 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 07:17:55.413024   31601 start.go:728] Will try again in 5 seconds ...
	I0429 07:18:00.414563   31601 start.go:360] acquireMachinesLock for multinode-548000: {Name:mkf8e57cc3eeb260fdebcc4e317197efd6f66b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 07:18:00.414796   31601 start.go:364] duration metric: took 188.026µs to acquireMachinesLock for "multinode-548000"
	I0429 07:18:00.414836   31601 start.go:96] Skipping create...Using existing machine configuration
	I0429 07:18:00.414844   31601 fix.go:54] fixHost starting: 
	I0429 07:18:00.415327   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:00.467186   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:18:00.467233   31601 fix.go:112] recreateIfNeeded on multinode-548000: state= err=unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:00.467256   31601 fix.go:117] machineExists: false. err=machine does not exist
	I0429 07:18:00.488825   31601 out.go:177] * docker "multinode-548000" container is missing, will recreate.
	I0429 07:18:00.530699   31601 delete.go:124] DEMOLISHING multinode-548000 ...
	I0429 07:18:00.530891   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:00.580965   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	W0429 07:18:00.581010   31601 stop.go:83] unable to get state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:00.581029   31601 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:00.581393   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:00.629832   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:18:00.629879   31601 delete.go:82] Unable to get host status for multinode-548000, assuming it has already been deleted: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:00.629961   31601 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-548000
	W0429 07:18:00.678393   31601 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-548000 returned with exit code 1
	I0429 07:18:00.678423   31601 kic.go:371] could not find the container multinode-548000 to remove it. will try anyways
	I0429 07:18:00.678492   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:00.727223   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	W0429 07:18:00.727266   31601 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:00.727355   31601 cli_runner.go:164] Run: docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0"
	W0429 07:18:00.776826   31601 cli_runner.go:211] docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 07:18:00.776856   31601 oci.go:650] error shutdown multinode-548000: docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:01.778179   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:01.827353   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:18:01.827397   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:01.827404   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:18:01.827427   31601 retry.go:31] will retry after 702.025148ms: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:02.531454   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:02.586736   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:18:02.586782   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:02.586792   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:18:02.586814   31601 retry.go:31] will retry after 923.532228ms: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:03.511562   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:03.562925   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:18:03.562974   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:03.562984   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:18:03.563018   31601 retry.go:31] will retry after 744.555648ms: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:04.308402   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:04.358539   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:18:04.358601   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:04.358612   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:18:04.358635   31601 retry.go:31] will retry after 893.246685ms: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:05.254220   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:05.306691   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:18:05.306736   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:05.306745   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:18:05.306769   31601 retry.go:31] will retry after 2.923215865s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:08.231700   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:08.283821   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:18:08.283865   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:08.283875   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:18:08.283900   31601 retry.go:31] will retry after 5.004487494s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:13.290712   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:13.342955   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:18:13.342998   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:13.343008   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:18:13.343033   31601 retry.go:31] will retry after 4.19083357s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:17.536188   31601 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:18:17.588415   31601 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:18:17.588460   31601 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:18:17.588470   31601 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:18:17.588498   31601 oci.go:88] couldn't shut down multinode-548000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	 
	I0429 07:18:17.588578   31601 cli_runner.go:164] Run: docker rm -f -v multinode-548000
	I0429 07:18:17.638050   31601 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-548000
	W0429 07:18:17.686472   31601 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-548000 returned with exit code 1
	I0429 07:18:17.686582   31601 cli_runner.go:164] Run: docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:18:17.735732   31601 cli_runner.go:164] Run: docker network rm multinode-548000
	I0429 07:18:17.837328   31601 fix.go:124] Sleeping 1 second for extra luck!
	I0429 07:18:18.839553   31601 start.go:125] createHost starting for "" (driver="docker")
	I0429 07:18:18.861761   31601 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0429 07:18:18.861954   31601 start.go:159] libmachine.API.Create for "multinode-548000" (driver="docker")
	I0429 07:18:18.861979   31601 client.go:168] LocalClient.Create starting
	I0429 07:18:18.862190   31601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 07:18:18.862285   31601 main.go:141] libmachine: Decoding PEM data...
	I0429 07:18:18.862310   31601 main.go:141] libmachine: Parsing certificate...
	I0429 07:18:18.862397   31601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 07:18:18.862471   31601 main.go:141] libmachine: Decoding PEM data...
	I0429 07:18:18.862486   31601 main.go:141] libmachine: Parsing certificate...
	I0429 07:18:18.863219   31601 cli_runner.go:164] Run: docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 07:18:18.917225   31601 cli_runner.go:211] docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 07:18:18.917317   31601 network_create.go:281] running [docker network inspect multinode-548000] to gather additional debugging logs...
	I0429 07:18:18.917335   31601 cli_runner.go:164] Run: docker network inspect multinode-548000
	W0429 07:18:18.965834   31601 cli_runner.go:211] docker network inspect multinode-548000 returned with exit code 1
	I0429 07:18:18.965868   31601 network_create.go:284] error running [docker network inspect multinode-548000]: docker network inspect multinode-548000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-548000 not found
	I0429 07:18:18.965880   31601 network_create.go:286] output of [docker network inspect multinode-548000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-548000 not found
	
	** /stderr **
	I0429 07:18:18.965993   31601 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:18:19.016445   31601 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:18:19.018191   31601 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:18:19.019917   31601 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:18:19.020238   31601 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00076bbb0}
	I0429 07:18:19.020253   31601 network_create.go:124] attempt to create docker network multinode-548000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0429 07:18:19.020324   31601 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-548000 multinode-548000
	W0429 07:18:19.068804   31601 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-548000 multinode-548000 returned with exit code 1
	W0429 07:18:19.068854   31601 network_create.go:149] failed to create docker network multinode-548000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-548000 multinode-548000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0429 07:18:19.068869   31601 network_create.go:116] failed to create docker network multinode-548000 192.168.76.0/24, will retry: subnet is taken
	I0429 07:18:19.070443   31601 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:18:19.070822   31601 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000806cf0}
	I0429 07:18:19.070834   31601 network_create.go:124] attempt to create docker network multinode-548000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0429 07:18:19.070904   31601 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-548000 multinode-548000
	I0429 07:18:19.154950   31601 network_create.go:108] docker network multinode-548000 192.168.85.0/24 created
	I0429 07:18:19.154982   31601 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-548000" container
	I0429 07:18:19.155096   31601 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 07:18:19.204567   31601 cli_runner.go:164] Run: docker volume create multinode-548000 --label name.minikube.sigs.k8s.io=multinode-548000 --label created_by.minikube.sigs.k8s.io=true
	I0429 07:18:19.251521   31601 oci.go:103] Successfully created a docker volume multinode-548000
	I0429 07:18:19.251637   31601 cli_runner.go:164] Run: docker run --rm --name multinode-548000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-548000 --entrypoint /usr/bin/test -v multinode-548000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 07:18:19.500850   31601 oci.go:107] Successfully prepared a docker volume multinode-548000
	I0429 07:18:19.500879   31601 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:18:19.500892   31601 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 07:18:19.500995   31601 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-548000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 07:24:18.927759   31601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:24:18.927884   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:18.980823   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:18.980943   31601 retry.go:31] will retry after 314.246721ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:19.297655   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:19.349616   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:19.349717   31601 retry.go:31] will retry after 538.802127ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:19.889886   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:19.942574   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:19.942686   31601 retry.go:31] will retry after 355.35866ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:20.300319   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:20.352104   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:24:20.352211   31601 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:24:20.352230   31601 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:20.352296   31601 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:24:20.352349   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:20.401864   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:20.401958   31601 retry.go:31] will retry after 298.057296ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:20.700421   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:20.752962   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:20.753073   31601 retry.go:31] will retry after 522.798225ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:21.277966   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:21.328292   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:21.328392   31601 retry.go:31] will retry after 473.885202ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:21.804693   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:21.856868   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:24:21.856981   31601 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:24:21.856994   31601 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:21.857006   31601 start.go:128] duration metric: took 6m2.953966183s to createHost
	I0429 07:24:21.857071   31601 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 07:24:21.857124   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:21.905679   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:21.905773   31601 retry.go:31] will retry after 208.2351ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:22.116398   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:22.169554   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:22.169649   31601 retry.go:31] will retry after 500.833408ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:22.672554   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:22.723316   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:22.723411   31601 retry.go:31] will retry after 392.197219ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:23.118062   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:23.170639   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:24:23.170746   31601 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:24:23.170763   31601 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:23.170817   31601 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 07:24:23.170870   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:23.218971   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:23.219067   31601 retry.go:31] will retry after 162.609434ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:23.384095   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:23.435852   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:23.435943   31601 retry.go:31] will retry after 337.05585ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:23.773762   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:23.825253   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:23.825346   31601 retry.go:31] will retry after 419.19368ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:24.246941   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:24.300004   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	I0429 07:24:24.300101   31601 retry.go:31] will retry after 523.442287ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:24.825949   31601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000
	W0429 07:24:24.877678   31601 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000 returned with exit code 1
	W0429 07:24:24.877775   31601 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	W0429 07:24:24.877795   31601 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-548000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:24.877807   31601 fix.go:56] duration metric: took 6m24.399469673s for fixHost
	I0429 07:24:24.877814   31601 start.go:83] releasing machines lock for "multinode-548000", held for 6m24.399512878s
	W0429 07:24:24.877892   31601 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-548000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-548000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0429 07:24:24.920071   31601 out.go:177] 
	W0429 07:24:24.941201   31601 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0429 07:24:24.941253   31601 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0429 07:24:24.941278   31601 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0429 07:24:24.962973   31601 out.go:177] 

** /stderr **
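
The network-creation sequence in the stderr dump above is worth pausing on: minikube walks a list of candidate private /24 subnets, skips the ones already reserved, and when "docker network create" fails with "Pool overlaps with other one on this address space" it treats the subnet as taken and falls through to the next candidate (192.168.76.0/24 failed, 192.168.85.0/24 succeeded). A minimal Go sketch of that fallback pattern, assuming only the docker CLI on PATH; tryCreateNetwork and the subnet list are illustrative, not minikube's network_create.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // tryCreateNetwork attempts "docker network create" on each candidate /24
    // in turn, treating the daemon's "Pool overlaps" error as "subnet taken,
    // try the next one" -- the same fallback visible in the log above.
    func tryCreateNetwork(name string, subnets []string) (string, error) {
    	for _, cidr := range subnets {
    		gw := strings.TrimSuffix(cidr, "0/24") + "1" // 192.168.85.0/24 -> 192.168.85.1
    		out, err := exec.Command("docker", "network", "create",
    			"--driver=bridge", "--subnet="+cidr, "--gateway="+gw, name).CombinedOutput()
    		if err == nil {
    			return cidr, nil
    		}
    		if strings.Contains(string(out), "Pool overlaps") {
    			continue // subnet held by another network; try the next candidate
    		}
    		return "", fmt.Errorf("docker network create: %v: %s", err, out)
    	}
    	return "", fmt.Errorf("no free subnet among %v", subnets)
    }

    func main() {
    	cidr, err := tryCreateNetwork("demo-net", []string{
    		"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24",
    		"192.168.76.0/24", "192.168.85.0/24",
    	})
    	fmt.Println(cidr, err)
    }

On the run above the fallback itself worked (the 192.168.85.0/24 network was created); only the container never appeared, which is why a leftover network bearing the cluster's name shows up in the post-mortems below.
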
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-548000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-548000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "960bb922aa11983408eb11945f0ec6c32599e7446da6abfe8450c68d8a182156",
	        "Created": "2024-04-29T14:18:19.115853116Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (114.002097ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 07:24:25.273721   31977 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (787.76s)
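
Almost every repeated block in this failure is the same probe: minikube asks the Docker daemon which host port is published for the container's 22/tcp (to reach sshd), gets "No such container" because host creation timed out before a container ever existed, and retries after a short randomized delay (the "will retry after 272.233764ms" lines). A minimal sketch of that probe-and-retry loop, assuming the docker CLI on PATH; sshHostPort and retry are illustrative names, not minikube's retry.go:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"strings"
    	"time"
    )

    // sshHostPort runs the same Go template seen throughout the log: it asks the
    // daemon which host port is mapped to the container's 22/tcp. When the
    // container does not exist, docker exits 1 and the error is returned.
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    // retry keeps calling f with a small randomized backoff until it succeeds
    // or the deadline passes -- the shape of the "will retry after ..." lines.
    func retry(deadline time.Duration, f func() error) error {
    	start := time.Now()
    	for {
    		err := f()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) >= deadline {
    			return err
    		}
    		time.Sleep(time.Duration(200+rand.Intn(600)) * time.Millisecond)
    	}
    }

    func main() {
    	err := retry(3*time.Second, func() error {
    		port, err := sshHostPort("multinode-548000")
    		if err == nil {
    			fmt.Println("ssh port:", port)
    		}
    		return err
    	})
    	if err != nil {
    		fmt.Println("gave up:", err)
    	}
    }
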

TestMultiNode/serial/DeleteNode (0.49s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 node delete m03: exit status 80 (199.552575ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-548000 host status: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	

** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-548000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr: exit status 7 (119.212778ms)

-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0429 07:24:25.539542   31985 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:24:25.539827   31985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:24:25.539833   31985 out.go:304] Setting ErrFile to fd 2...
	I0429 07:24:25.539837   31985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:24:25.540020   31985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:24:25.540214   31985 out.go:298] Setting JSON to false
	I0429 07:24:25.540236   31985 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:24:25.540273   31985 notify.go:220] Checking for updates...
	I0429 07:24:25.541651   31985 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:24:25.541677   31985 status.go:255] checking status of multinode-548000 ...
	I0429 07:24:25.542065   31985 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:25.592595   31985 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:24:25.592656   31985 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:24:25.592677   31985 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:24:25.592695   31985 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:24:25.592702   31985 status.go:263] The "multinode-548000" host does not exist!

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "960bb922aa11983408eb11945f0ec6c32599e7446da6abfe8450c68d8a182156",
	        "Created": "2024-04-29T14:18:19.115853116Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (114.448735ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 07:24:25.758647   31991 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.49s)
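
A detail that explains the post-mortems in these subtests: "docker inspect multinode-548000" succeeds, but the JSON it returns is the leftover bridge network of that name (created at 07:18:19 during the failed restart), not a container. The container was never created, so every status probe translates the daemon's "No such container" error into the "Nonexistent" state printed above. A minimal sketch of that status mapping, assuming the docker CLI on PATH; hostState is an illustrative helper, not minikube's status.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostState mirrors the probe behind "minikube status": read the container's
    // .State.Status through a Go template and translate a "No such container"
    // failure into the "Nonexistent" state shown in the output above.
    func hostState(container string) string {
    	out, err := exec.Command("docker", "container", "inspect",
    		"--format={{.State.Status}}", container).CombinedOutput()
    	if err != nil {
    		if strings.Contains(string(out), "No such container") {
    			return "Nonexistent"
    		}
    		return "Error"
    	}
    	return strings.TrimSpace(string(out)) // e.g. "running" or "exited"
    }

    func main() {
    	fmt.Println(hostState("multinode-548000"))
    }
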

TestMultiNode/serial/StopMultiNode (15.56s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 stop: exit status 82 (15.158691321s)

-- stdout --
	* Stopping node "multinode-548000"  ...
	* Stopping node "multinode-548000"  ...
	* Stopping node "multinode-548000"  ...
	* Stopping node "multinode-548000"  ...
	* Stopping node "multinode-548000"  ...
	* Stopping node "multinode-548000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-548000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-548000 stop": exit status 82
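Exit status 82 (GUEST_STOP_TIMEOUT) is the bounded-retry path: each "Stopping node" line above is one attempt, and after roughly 15 seconds of never being able to verify the state of the (nonexistent) container, minikube gives up. A rough stand-in for that control flow, assuming plain docker stop in place of minikube's stop machinery:

	package main

	import (
		"context"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		// Deadline chosen to match the ~15s the failed stop above took.
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		for ctx.Err() == nil {
			fmt.Println(`* Stopping node "multinode-548000"  ...`)
			// docker stop exits non-zero for a missing container, so this
			// loop spins until the deadline, just like the log above.
			if exec.Command("docker", "stop", "multinode-548000").Run() == nil {
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Fprintln(os.Stderr, "X Exiting due to GUEST_STOP_TIMEOUT")
		os.Exit(82)
	}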
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status: exit status 7 (114.781763ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 07:24:41.032689   32010 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:24:41.032701   32010 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr: exit status 7 (115.379599ms)

                                                
                                                
-- stdout --
	multinode-548000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:24:41.096652   32014 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:24:41.097443   32014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:24:41.097451   32014 out.go:304] Setting ErrFile to fd 2...
	I0429 07:24:41.097456   32014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:24:41.097929   32014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:24:41.098112   32014 out.go:298] Setting JSON to false
	I0429 07:24:41.098135   32014 mustload.go:65] Loading cluster: multinode-548000
	I0429 07:24:41.098168   32014 notify.go:220] Checking for updates...
	I0429 07:24:41.098385   32014 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:24:41.098399   32014 status.go:255] checking status of multinode-548000 ...
	I0429 07:24:41.098787   32014 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:41.148010   32014 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:24:41.148058   32014 status.go:330] multinode-548000 host status = "" (err=state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	)
	I0429 07:24:41.148079   32014 status.go:257] multinode-548000 status: &{Name:multinode-548000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 07:24:41.148095   32014 status.go:260] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	E0429 07:24:41.148104   32014 status.go:263] The "multinode-548000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr": multinode-548000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-548000 status --alsologtostderr": multinode-548000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
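Both assertions above (multinode_test.go:364 and :368) count how many nodes report a stopped host and kubelet in the status text and compare that against the expected node count; with every field "Nonexistent", both counts come out as zero. A sketch of that kind of check, written against the exact output captured above (the real assertion in multinode_test.go may be phrased differently):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status text as printed by `minikube status` in the run above.
		status := "multinode-548000\n" +
			"type: Control Plane\n" +
			"host: Nonexistent\n" +
			"kubelet: Nonexistent\n" +
			"apiserver: Nonexistent\n" +
			"kubeconfig: Nonexistent\n"
		// A successful stop would yield one of each; here both are 0.
		fmt.Println("stopped hosts:", strings.Count(status, "host: Stopped"))
		fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))
	}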

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "960bb922aa11983408eb11945f0ec6c32599e7446da6abfe8450c68d8a182156",
	        "Created": "2024-04-29T14:18:19.115853116Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (113.694967ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 07:24:41.314065   32020 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (15.56s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (63.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-548000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-548000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m3.419691405s)

                                                
                                                
-- stdout --
	* [multinode-548000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-548000" primary control-plane node in "multinode-548000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* docker "multinode-548000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 07:24:41.377778   32024 out.go:291] Setting OutFile to fd 1 ...
	I0429 07:24:41.377990   32024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:24:41.377998   32024 out.go:304] Setting ErrFile to fd 2...
	I0429 07:24:41.378001   32024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 07:24:41.378173   32024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 07:24:41.379549   32024 out.go:298] Setting JSON to false
	I0429 07:24:41.401450   32024 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":19455,"bootTime":1714381226,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 07:24:41.401539   32024 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 07:24:41.422789   32024 out.go:177] * [multinode-548000] minikube v1.33.0 on Darwin 14.4.1
	I0429 07:24:41.464804   32024 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 07:24:41.464831   32024 notify.go:220] Checking for updates...
	I0429 07:24:41.507603   32024 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	I0429 07:24:41.528688   32024 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 07:24:41.549717   32024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 07:24:41.570836   32024 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	I0429 07:24:41.591807   32024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 07:24:41.613445   32024 config.go:182] Loaded profile config "multinode-548000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 07:24:41.614258   32024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 07:24:41.668487   32024 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 07:24:41.668644   32024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:24:41.777552   32024 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:145 SystemTime:2024-04-29 14:24:41.766051775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:24:41.819943   32024 out.go:177] * Using the docker driver based on existing profile
	I0429 07:24:41.840972   32024 start.go:297] selected driver: docker
	I0429 07:24:41.841001   32024 start.go:901] validating driver "docker" against &{Name:multinode-548000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-548000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 07:24:41.841146   32024 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 07:24:41.841358   32024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 07:24:41.948739   32024 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:145 SystemTime:2024-04-29 14:24:41.938167015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 07:24:41.951757   32024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 07:24:41.951828   32024 cni.go:84] Creating CNI manager for ""
	I0429 07:24:41.951838   32024 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 07:24:41.951907   32024 start.go:340] cluster config:
	{Name:multinode-548000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-548000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 07:24:41.995172   32024 out.go:177] * Starting "multinode-548000" primary control-plane node in "multinode-548000" cluster
	I0429 07:24:42.016345   32024 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 07:24:42.038377   32024 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 07:24:42.080420   32024 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:24:42.080486   32024 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 07:24:42.080501   32024 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 07:24:42.080524   32024 cache.go:56] Caching tarball of preloaded images
	I0429 07:24:42.080734   32024 preload.go:173] Found /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 07:24:42.080754   32024 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 07:24:42.080882   32024 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/multinode-548000/config.json ...
	I0429 07:24:42.132507   32024 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 07:24:42.132532   32024 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 07:24:42.132553   32024 cache.go:194] Successfully downloaded all kic artifacts
	I0429 07:24:42.132598   32024 start.go:360] acquireMachinesLock for multinode-548000: {Name:mkf8e57cc3eeb260fdebcc4e317197efd6f66b02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 07:24:42.132697   32024 start.go:364] duration metric: took 80.286µs to acquireMachinesLock for "multinode-548000"
	I0429 07:24:42.132720   32024 start.go:96] Skipping create...Using existing machine configuration
	I0429 07:24:42.132730   32024 fix.go:54] fixHost starting: 
	I0429 07:24:42.132977   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:42.182866   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:24:42.182917   32024 fix.go:112] recreateIfNeeded on multinode-548000: state= err=unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:42.182940   32024 fix.go:117] machineExists: false. err=machine does not exist
	I0429 07:24:42.204770   32024 out.go:177] * docker "multinode-548000" container is missing, will recreate.
	I0429 07:24:42.247306   32024 delete.go:124] DEMOLISHING multinode-548000 ...
	I0429 07:24:42.247490   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:42.297523   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	W0429 07:24:42.297572   32024 stop.go:83] unable to get state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:42.297589   32024 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:42.297950   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:42.346262   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:24:42.346330   32024 delete.go:82] Unable to get host status for multinode-548000, assuming it has already been deleted: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:42.346417   32024 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-548000
	W0429 07:24:42.393942   32024 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-548000 returned with exit code 1
	I0429 07:24:42.393976   32024 kic.go:371] could not find the container multinode-548000 to remove it. will try anyways
	I0429 07:24:42.394044   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:42.442411   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	W0429 07:24:42.442459   32024 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:42.442532   32024 cli_runner.go:164] Run: docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0"
	W0429 07:24:42.491164   32024 cli_runner.go:211] docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0429 07:24:42.491196   32024 oci.go:650] error shutdown multinode-548000: docker exec --privileged -t multinode-548000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:43.492638   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:43.544848   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:24:43.544889   32024 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:43.544907   32024 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:24:43.544945   32024 retry.go:31] will retry after 676.672761ms: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:44.223042   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:44.276105   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:24:44.276148   32024 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:44.276155   32024 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:24:44.276180   32024 retry.go:31] will retry after 670.159679ms: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:44.947756   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:44.999943   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:24:44.999989   32024 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:45.000003   32024 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:24:45.000023   32024 retry.go:31] will retry after 621.474784ms: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:45.623053   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:45.675750   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:24:45.675792   32024 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:45.675803   32024 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:24:45.675826   32024 retry.go:31] will retry after 2.211711855s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:47.889182   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:47.941620   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:24:47.941664   32024 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:47.941673   32024 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:24:47.941698   32024 retry.go:31] will retry after 3.300553241s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:51.242771   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:51.293835   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:24:51.293881   32024 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:51.293890   32024 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:24:51.293916   32024 retry.go:31] will retry after 3.465361809s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:54.761385   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:24:54.811871   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:24:54.811922   32024 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:24:54.811931   32024 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:24:54.811956   32024 retry.go:31] will retry after 5.520886892s: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:25:00.335224   32024 cli_runner.go:164] Run: docker container inspect multinode-548000 --format={{.State.Status}}
	W0429 07:25:00.386896   32024 cli_runner.go:211] docker container inspect multinode-548000 --format={{.State.Status}} returned with exit code 1
	I0429 07:25:00.386939   32024 oci.go:662] temporary error verifying shutdown: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	I0429 07:25:00.386946   32024 oci.go:664] temporary error: container multinode-548000 status is  but expect it to be exited
	I0429 07:25:00.386976   32024 oci.go:88] couldn't shut down multinode-548000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000
	 
	I0429 07:25:00.387052   32024 cli_runner.go:164] Run: docker rm -f -v multinode-548000
	I0429 07:25:00.435906   32024 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-548000
	W0429 07:25:00.483674   32024 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-548000 returned with exit code 1
	I0429 07:25:00.483780   32024 cli_runner.go:164] Run: docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:25:00.532768   32024 cli_runner.go:164] Run: docker network rm multinode-548000
	I0429 07:25:00.639917   32024 fix.go:124] Sleeping 1 second for extra luck!
	I0429 07:25:01.640182   32024 start.go:125] createHost starting for "" (driver="docker")
	I0429 07:25:01.661875   32024 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0429 07:25:01.662013   32024 start.go:159] libmachine.API.Create for "multinode-548000" (driver="docker")
	I0429 07:25:01.662048   32024 client.go:168] LocalClient.Create starting
	I0429 07:25:01.662182   32024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/ca.pem
	I0429 07:25:01.662248   32024 main.go:141] libmachine: Decoding PEM data...
	I0429 07:25:01.662271   32024 main.go:141] libmachine: Parsing certificate...
	I0429 07:25:01.662343   32024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18773-22625/.minikube/certs/cert.pem
	I0429 07:25:01.662404   32024 main.go:141] libmachine: Decoding PEM data...
	I0429 07:25:01.662415   32024 main.go:141] libmachine: Parsing certificate...
	I0429 07:25:01.683150   32024 cli_runner.go:164] Run: docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 07:25:01.733459   32024 cli_runner.go:211] docker network inspect multinode-548000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 07:25:01.733548   32024 network_create.go:281] running [docker network inspect multinode-548000] to gather additional debugging logs...
	I0429 07:25:01.733567   32024 cli_runner.go:164] Run: docker network inspect multinode-548000
	W0429 07:25:01.781400   32024 cli_runner.go:211] docker network inspect multinode-548000 returned with exit code 1
	I0429 07:25:01.781428   32024 network_create.go:284] error running [docker network inspect multinode-548000]: docker network inspect multinode-548000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-548000 not found
	I0429 07:25:01.781450   32024 network_create.go:286] output of [docker network inspect multinode-548000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-548000 not found
	
	** /stderr **
	I0429 07:25:01.781583   32024 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 07:25:01.832691   32024 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:25:01.834274   32024 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0429 07:25:01.834606   32024 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024051a0}
	I0429 07:25:01.834624   32024 network_create.go:124] attempt to create docker network multinode-548000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0429 07:25:01.834696   32024 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-548000 multinode-548000
	I0429 07:25:01.920100   32024 network_create.go:108] docker network multinode-548000 192.168.67.0/24 created
	I0429 07:25:01.920139   32024 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-548000" container
	I0429 07:25:01.920252   32024 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 07:25:01.970115   32024 cli_runner.go:164] Run: docker volume create multinode-548000 --label name.minikube.sigs.k8s.io=multinode-548000 --label created_by.minikube.sigs.k8s.io=true
	I0429 07:25:02.018654   32024 oci.go:103] Successfully created a docker volume multinode-548000
	I0429 07:25:02.018765   32024 cli_runner.go:164] Run: docker run --rm --name multinode-548000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-548000 --entrypoint /usr/bin/test -v multinode-548000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 07:25:02.264884   32024 oci.go:107] Successfully prepared a docker volume multinode-548000
	I0429 07:25:02.264922   32024 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 07:25:02.264935   32024 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 07:25:02.265041   32024 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-548000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-548000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
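The shutdown-verification loop in the log above (retry.go:31) waits 676ms, 670ms, 621ms, 2.2s, 3.3s, 3.47s, then 5.5s between inspects: a growing, jittered backoff. An illustrative equivalent; the shape and caps of minikube's real retry helper may differ:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds or attempts are exhausted, roughly
	// doubling the delay every other attempt and adding random jitter,
	// which is the pattern the delays in the log above trace out.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base<<uint(i/2) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retry(5, 500*time.Millisecond, func() error {
			return errors.New("couldn't verify container is exited")
		})
	}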
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-548000
helpers_test.go:235: (dbg) docker inspect multinode-548000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-548000",
	        "Id": "f2a83e55d8550dd116b58cdc6fbd0a037cdba36b0b6db3e5ce986f669e6a027e",
	        "Created": "2024-04-29T14:25:01.880585992Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-548000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-548000 -n multinode-548000: exit status 7 (116.035722ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 07:25:44.906478   32125 status.go:249] status error: host: state: unknown state "multinode-548000": docker container inspect multinode-548000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-548000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-548000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (63.59s)
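One detail worth noting from the restart log: at network.go:209/206 minikube skips the reserved subnets 192.168.49.0/24 and 192.168.58.0/24 and settles on 192.168.67.0/24, i.e. the third octet appears to step by 9 until a free candidate turns up. A toy version of that walk, with the reserved set hard-coded for the demo; the real scan inspects existing docker networks and host interfaces:

	package main

	import "fmt"

	func main() {
		reserved := map[string]bool{
			"192.168.49.0/24": true, // skipped in the log above
			"192.168.58.0/24": true, // skipped in the log above
		}
		for octet := 49; octet <= 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !reserved[subnet] {
				fmt.Println("using free private subnet", subnet)
				return // prints 192.168.67.0/24, matching the log
			}
		}
	}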

                                                
                                    
TestScheduledStopUnix (300.89s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-191000 --memory=2048 --driver=docker 
E0429 07:29:06.884342   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:29:22.420566   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 07:30:30.020284   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-191000 --memory=2048 --driver=docker : signal: killed (5m0.004537145s)

                                                
                                                
-- stdout --
	* [scheduled-stop-191000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-191000" primary control-plane node in "scheduled-stop-191000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-191000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-191000" primary control-plane node in "scheduled-stop-191000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-29 07:32:44.847833 -0700 PDT m=+4795.866532966
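The m=+4795.866532966 suffix in the timestamp above is Go's monotonic clock reading, which time.Time's String method appends automatically: the test binary had been running for roughly 4,796 seconds (about 80 minutes) when this test failed. For example:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Prints something like:
		// 2024-04-29 07:32:44.847833 -0700 PDT m=+0.000012345
		fmt.Println(time.Now())
	}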
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-191000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-191000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-191000",
	        "Id": "86011b76cc3503d57c4a9f4e1904b8ee40f6acf40516e144de7c4a227c9cea80",
	        "Created": "2024-04-29T14:27:45.914066991Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-191000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-191000 -n scheduled-stop-191000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-191000 -n scheduled-stop-191000: exit status 7 (113.956396ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 07:32:45.015083   32777 status.go:249] status error: host: state: unknown state "scheduled-stop-191000": docker container inspect scheduled-stop-191000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-191000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-191000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-191000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-191000
--- FAIL: TestScheduledStopUnix (300.89s)
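The "signal: killed" failure above is what Go's os/exec surfaces when a child process is killed on a context deadline, which matches the ~300 s cutoff seen here. A minimal sketch of that behavior, assuming (as the timing suggests, and not confirmed by this report) that the harness drives minikube through exec.CommandContext; the snippet is illustrative and not part of the test suite:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Give the child far less time than it needs, as the expired
		// test deadline effectively did to "minikube start".
		ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
		defer cancel()

		// When the deadline passes, the child receives SIGKILL and the
		// error reads exactly "signal: killed".
		err := exec.CommandContext(ctx, "sleep", "10").Run()
		fmt.Println(err) // prints: signal: killed
	}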

TestSkaffold (300.95s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe2202047792 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe2202047792 version: (1.494101015s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-237000 --memory=2600 --driver=docker 
E0429 07:34:06.886836   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:34:22.422493   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 07:35:45.475251   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-237000 --memory=2600 --driver=docker : signal: killed (4m56.544049327s)

-- stdout --
	* [skaffold-237000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-237000" primary control-plane node in "skaffold-237000" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

panic.go:626: *** TestSkaffold FAILED at 2024-04-29 07:37:45.740324 -0700 PDT m=+5096.757090735
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-237000
helpers_test.go:235: (dbg) docker inspect skaffold-237000:

-- stdout --
	[
	    {
	        "Name": "skaffold-237000",
	        "Id": "bc9336bda167fdbc5d9213c424b034b9114a5a8c9a736ae45556a9945d949e31",
	        "Created": "2024-04-29T14:32:50.290233067Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-237000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-237000 -n skaffold-237000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-237000 -n skaffold-237000: exit status 7 (114.560907ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0429 07:37:45.909946   33145 status.go:249] status error: host: state: unknown state "skaffold-237000": docker container inspect skaffold-237000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-237000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-237000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-237000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-237000
--- FAIL: TestSkaffold (300.95s)

TestInsufficientStorage (300.73s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-588000 --memory=2048 --output=json --wait=true --driver=docker 
E0429 07:39:06.889497   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 07:39:22.424447   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-588000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.004150502s)

-- stdout --
	{"specversion":"1.0","id":"a0399c4b-cad0-4d58-b965-c1701bc2b9a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-588000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"047050ae-21a2-4a81-b1a0-6e9c127a7ff5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18773"}}
	{"specversion":"1.0","id":"f513a743-763b-4a2e-b1fe-f3db7f1cd748","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig"}}
	{"specversion":"1.0","id":"8ae7b244-729f-407e-9a0f-dd2e1b3d0288","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"701a6261-73c7-49f2-9104-98ee3c5fed5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a79a82fa-4713-4556-bc76-1f31f80954cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube"}}
	{"specversion":"1.0","id":"fd025891-79d5-42c4-a612-6293eefd867e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0e57ec88-b30c-4223-8462-9ab5ae52a906","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"66e5e597-d59e-446c-ba29-443cc05a682b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9b028eaf-470e-4d05-ab83-0d0f03bd1c9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"af98da35-7ff0-4aa4-b282-b336d926cc98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"fc9e50c3-5e49-48a1-b1db-4019a0c1d272","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-588000\" primary control-plane node in \"insufficient-storage-588000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb0bb8b6-fc07-424d-b688-067f0af7d011","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1713736339-18706 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"39baff51-fe87-4136-aa44-d09c154b5ead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-588000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-588000 --output=json --layout=cluster: context deadline exceeded (750ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-588000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-588000
--- FAIL: TestInsufficientStorage (300.73s)
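The unmarshalling error above follows directly from the empty stdout: the status command was cut off by an already-expired context (750ns), produced no JSON, and Go's encoding/json reports exactly "unexpected end of JSON input" for empty input. A minimal sketch reproducing the message outside the suite (illustrative only; the variable names are not from the test code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var status map[string]interface{}
		// Empty stdout, as returned by the killed status command.
		err := json.Unmarshal([]byte(""), &status)
		fmt.Println(err) // prints: unexpected end of JSON input
	}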


Test pass (164/203)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 27.32
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.62
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.30.0/json-events 10.73
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.43
18 TestDownloadOnly/v1.30.0/DeleteAll 0.63
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.37
20 TestDownloadOnlyKic 1.89
21 TestBinaryMirror 1.67
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.15
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.17
27 TestAddons/Setup 332.61
31 TestAddons/parallel/InspektorGadget 10.76
32 TestAddons/parallel/MetricsServer 5.79
33 TestAddons/parallel/HelmTiller 9.83
35 TestAddons/parallel/CSI 51.02
36 TestAddons/parallel/Headlamp 13.18
37 TestAddons/parallel/CloudSpanner 5.65
38 TestAddons/parallel/LocalPath 52.87
39 TestAddons/parallel/NvidiaDevicePlugin 5.61
40 TestAddons/parallel/Yakd 5.01
43 TestAddons/serial/GCPAuth/Namespaces 0.1
44 TestAddons/StoppedEnableDisable 11.67
52 TestHyperKitDriverInstallOrUpdate 8.25
55 TestErrorSpam/setup 20.64
56 TestErrorSpam/start 2.09
57 TestErrorSpam/status 1.17
58 TestErrorSpam/pause 1.64
59 TestErrorSpam/unpause 1.7
60 TestErrorSpam/stop 11.39
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 35.38
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 34.08
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.06
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.43
72 TestFunctional/serial/CacheCmd/cache/add_local 1.63
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
74 TestFunctional/serial/CacheCmd/cache/list 0.09
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.4
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.88
77 TestFunctional/serial/CacheCmd/cache/delete 0.18
78 TestFunctional/serial/MinikubeKubectlCmd 1.02
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.46
80 TestFunctional/serial/ExtraConfig 41.96
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 3.16
83 TestFunctional/serial/LogsFileCmd 3.12
84 TestFunctional/serial/InvalidService 4.02
86 TestFunctional/parallel/ConfigCmd 0.53
87 TestFunctional/parallel/DashboardCmd 13.73
88 TestFunctional/parallel/DryRun 1.97
89 TestFunctional/parallel/InternationalLanguage 0.7
90 TestFunctional/parallel/StatusCmd 1.17
95 TestFunctional/parallel/AddonsCmd 0.28
96 TestFunctional/parallel/PersistentVolumeClaim 37.43
98 TestFunctional/parallel/SSHCmd 0.78
99 TestFunctional/parallel/CpCmd 2.91
100 TestFunctional/parallel/MySQL 30.78
101 TestFunctional/parallel/FileSync 0.44
102 TestFunctional/parallel/CertSync 2.51
106 TestFunctional/parallel/NodeLabels 0.05
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
110 TestFunctional/parallel/License 0.66
111 TestFunctional/parallel/Version/short 0.11
112 TestFunctional/parallel/Version/components 0.79
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
117 TestFunctional/parallel/ImageCommands/ImageBuild 2.66
118 TestFunctional/parallel/ImageCommands/Setup 2.52
119 TestFunctional/parallel/DockerEnv/bash 2.11
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.12
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.39
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.52
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.7
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.71
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.42
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.62
130 TestFunctional/parallel/ServiceCmd/DeployApp 15.16
131 TestFunctional/parallel/ServiceCmd/List 0.43
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
135 TestFunctional/parallel/ServiceCmd/HTTPS 15
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.17
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
145 TestFunctional/parallel/ServiceCmd/Format 15
146 TestFunctional/parallel/ServiceCmd/URL 15
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
148 TestFunctional/parallel/ProfileCmd/profile_list 0.53
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
150 TestFunctional/parallel/MountCmd/any-port 8.34
151 TestFunctional/parallel/MountCmd/specific-port 2.5
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.94
153 TestFunctional/delete_addon-resizer_images 0.12
154 TestFunctional/delete_my-image_image 0.05
155 TestFunctional/delete_minikube_cached_images 0.05
159 TestMultiControlPlane/serial/StartCluster 97.3
160 TestMultiControlPlane/serial/DeployApp 5.25
161 TestMultiControlPlane/serial/PingHostFromPods 1.39
162 TestMultiControlPlane/serial/AddWorkerNode 18.82
163 TestMultiControlPlane/serial/NodeLabels 0.05
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
165 TestMultiControlPlane/serial/CopyFile 23.66
166 TestMultiControlPlane/serial/StopSecondaryNode 11.9
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
168 TestMultiControlPlane/serial/RestartSecondaryNode 69.44
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.06
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 177.72
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.7
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
173 TestMultiControlPlane/serial/StopCluster 32.8
174 TestMultiControlPlane/serial/RestartCluster 69.28
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
176 TestMultiControlPlane/serial/AddSecondaryNode 35.57
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
180 TestImageBuild/serial/Setup 20.3
181 TestImageBuild/serial/NormalBuild 1.89
182 TestImageBuild/serial/BuildWithBuildArg 0.98
183 TestImageBuild/serial/BuildWithDockerIgnore 0.78
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.83
188 TestJSONOutput/start/Command 36.55
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.56
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.59
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.67
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.77
213 TestKicCustomNetwork/create_custom_network 21.91
214 TestKicCustomNetwork/use_default_bridge_network 22.64
215 TestKicExistingNetwork 23.12
216 TestKicCustomSubnet 21.63
217 TestKicStaticIP 22.38
218 TestMainNoArgs 0.09
219 TestMinikubeProfile 46.14
222 TestMountStart/serial/StartWithMountFirst 7.04
223 TestMountStart/serial/VerifyMountFirst 0.38
224 TestMountStart/serial/StartWithMountSecond 7.05
244 TestPreload 119.05
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 11.22
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.69
TestDownloadOnly/v1.20.0/json-events (27.32s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-576000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-576000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (27.318420606s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (27.32s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-576000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-576000: exit status 85 (294.705531ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-576000 | jenkins | v1.33.0 | 29 Apr 24 06:12 PDT |          |
	|         | -p download-only-576000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 06:12:48
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 06:12:48.906359   23096 out.go:291] Setting OutFile to fd 1 ...
	I0429 06:12:48.906644   23096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:12:48.906650   23096 out.go:304] Setting ErrFile to fd 2...
	I0429 06:12:48.906654   23096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:12:48.906823   23096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	W0429 06:12:48.906916   23096 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18773-22625/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18773-22625/.minikube/config/config.json: no such file or directory
	I0429 06:12:48.908629   23096 out.go:298] Setting JSON to true
	I0429 06:12:48.930622   23096 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":15142,"bootTime":1714381226,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 06:12:48.930723   23096 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 06:12:48.953009   23096 out.go:97] [download-only-576000] minikube v1.33.0 on Darwin 14.4.1
	I0429 06:12:48.974691   23096 out.go:169] MINIKUBE_LOCATION=18773
	I0429 06:12:48.953218   23096 notify.go:220] Checking for updates...
	W0429 06:12:48.953227   23096 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball: no such file or directory
	I0429 06:12:49.017545   23096 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	I0429 06:12:49.038563   23096 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 06:12:49.059543   23096 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 06:12:49.080688   23096 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	W0429 06:12:49.122215   23096 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 06:12:49.122718   23096 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 06:12:49.180066   23096 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 06:12:49.180204   23096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 06:12:49.288537   23096 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:94 SystemTime:2024-04-29 13:12:49.277947982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 06:12:49.309699   23096 out.go:97] Using the docker driver based on user configuration
	I0429 06:12:49.309749   23096 start.go:297] selected driver: docker
	I0429 06:12:49.309772   23096 start.go:901] validating driver "docker" against <nil>
	I0429 06:12:49.309964   23096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 06:12:49.420974   23096 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:94 SystemTime:2024-04-29 13:12:49.410358419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 06:12:49.421152   23096 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 06:12:49.424012   23096 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0429 06:12:49.424149   23096 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 06:12:49.445586   23096 out.go:169] Using Docker Desktop driver with root privileges
	I0429 06:12:49.466559   23096 cni.go:84] Creating CNI manager for ""
	I0429 06:12:49.466602   23096 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0429 06:12:49.466731   23096 start.go:340] cluster config:
	{Name:download-only-576000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 06:12:49.488593   23096 out.go:97] Starting "download-only-576000" primary control-plane node in "download-only-576000" cluster
	I0429 06:12:49.488634   23096 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 06:12:49.509566   23096 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0429 06:12:49.509649   23096 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 06:12:49.509760   23096 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 06:12:49.559302   23096 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 06:12:49.559561   23096 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 06:12:49.559702   23096 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 06:12:49.566883   23096 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0429 06:12:49.566919   23096 cache.go:56] Caching tarball of preloaded images
	I0429 06:12:49.567144   23096 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 06:12:49.588331   23096 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 06:12:49.588362   23096 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 06:12:49.689337   23096 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0429 06:12:59.467965   23096 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 06:12:59.468169   23096 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 06:13:00.020998   23096 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0429 06:13:00.021233   23096 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/download-only-576000/config.json ...
	I0429 06:13:00.021256   23096 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/download-only-576000/config.json: {Name:mk79e1fb144f97bb18607352e777bd8e506abbea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 06:13:00.022262   23096 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 06:13:00.022758   23096 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-576000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-576000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

TestDownloadOnly/v1.20.0/DeleteAll (0.62s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.62s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-576000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnly/v1.30.0/json-events (10.73s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker : (10.728084143s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (10.73s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.43s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-557000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-557000: exit status 85 (429.469137ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-576000 | jenkins | v1.33.0 | 29 Apr 24 06:12 PDT |                     |
	|         | -p download-only-576000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 29 Apr 24 06:13 PDT | 29 Apr 24 06:13 PDT |
	| delete  | -p download-only-576000        | download-only-576000 | jenkins | v1.33.0 | 29 Apr 24 06:13 PDT | 29 Apr 24 06:13 PDT |
	| start   | -o=json --download-only        | download-only-557000 | jenkins | v1.33.0 | 29 Apr 24 06:13 PDT |                     |
	|         | -p download-only-557000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 06:13:17
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 06:13:17.519000   23194 out.go:291] Setting OutFile to fd 1 ...
	I0429 06:13:17.519177   23194 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:13:17.519183   23194 out.go:304] Setting ErrFile to fd 2...
	I0429 06:13:17.519187   23194 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:13:17.519372   23194 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 06:13:17.520873   23194 out.go:298] Setting JSON to true
	I0429 06:13:17.542799   23194 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":15171,"bootTime":1714381226,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 06:13:17.542882   23194 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 06:13:17.564033   23194 out.go:97] [download-only-557000] minikube v1.33.0 on Darwin 14.4.1
	I0429 06:13:17.586034   23194 out.go:169] MINIKUBE_LOCATION=18773
	I0429 06:13:17.564260   23194 notify.go:220] Checking for updates...
	I0429 06:13:17.628674   23194 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	I0429 06:13:17.649804   23194 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 06:13:17.671056   23194 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 06:13:17.691975   23194 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	W0429 06:13:17.733895   23194 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 06:13:17.734504   23194 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 06:13:17.791515   23194 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 06:13:17.791666   23194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 06:13:17.899184   23194 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:94 SystemTime:2024-04-29 13:13:17.889278781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 06:13:17.921023   23194 out.go:97] Using the docker driver based on user configuration
	I0429 06:13:17.921123   23194 start.go:297] selected driver: docker
	I0429 06:13:17.921144   23194 start.go:901] validating driver "docker" against <nil>
	I0429 06:13:17.921332   23194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 06:13:18.028212   23194 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:94 SystemTime:2024-04-29 13:13:18.018417518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 06:13:18.028374   23194 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 06:13:18.031181   23194 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0429 06:13:18.031325   23194 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 06:13:18.052558   23194 out.go:169] Using Docker Desktop driver with root privileges
	I0429 06:13:18.074339   23194 cni.go:84] Creating CNI manager for ""
	I0429 06:13:18.074383   23194 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 06:13:18.074412   23194 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 06:13:18.074541   23194 start.go:340] cluster config:
	{Name:download-only-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-557000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 06:13:18.096237   23194 out.go:97] Starting "download-only-557000" primary control-plane node in "download-only-557000" cluster
	I0429 06:13:18.096280   23194 cache.go:121] Beginning downloading kic base image for docker with docker
	I0429 06:13:18.117319   23194 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0429 06:13:18.117489   23194 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 06:13:18.117490   23194 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 06:13:18.166665   23194 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 06:13:18.166893   23194 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 06:13:18.166917   23194 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory, skipping pull
	I0429 06:13:18.166923   23194 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in cache, skipping pull
	I0429 06:13:18.166931   23194 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	I0429 06:13:18.167901   23194 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 06:13:18.167916   23194 cache.go:56] Caching tarball of preloaded images
	I0429 06:13:18.168069   23194 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 06:13:18.189170   23194 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0429 06:13:18.189229   23194 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 06:13:18.272561   23194 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 06:13:23.642110   23194 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 06:13:23.642300   23194 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 06:13:24.137934   23194 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 06:13:24.138176   23194 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/download-only-557000/config.json ...
	I0429 06:13:24.138198   23194 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/download-only-557000/config.json: {Name:mk9eccfd00cf51745b3382b52871f38a72228500 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 06:13:24.138509   23194 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 06:13:24.138716   23194 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18773-22625/.minikube/cache/darwin/amd64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-557000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-557000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.43s)
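The preload download above is fetched with a "?checksum=md5:..." query, and the harness saves and verifies the digest (preload.go:237-255) before trusting the cached tarball. A minimal Go sketch of that download-and-verify pattern, not minikube's actual download.go: the URL and output path are placeholders, and the MD5 value is copied from the log line above purely for illustration.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	url := "https://example.com/preloaded-images.tar.lz4" // placeholder URL
	want := "00b6acf85a82438f3897c0a6fafdcee7"             // digest from the log above, for illustration

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("preload.tar.lz4") // placeholder output path
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// Hash while writing so the file is only streamed once.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		log.Fatal(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		log.Fatalf("checksum mismatch: got %s, want %s", got, want)
	}
	fmt.Println("download verified")
}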

TestDownloadOnly/v1.30.0/DeleteAll (0.63s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.63s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-557000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnlyKic (1.89s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-029000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-029000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-029000
--- PASS: TestDownloadOnlyKic (1.89s)

TestBinaryMirror (1.67s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-152000 --alsologtostderr --binary-mirror http://127.0.0.1:50272 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-152000 --alsologtostderr --binary-mirror http://127.0.0.1:50272 --driver=docker : (1.076644347s)
helpers_test.go:175: Cleaning up "binary-mirror-152000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-152000
--- PASS: TestBinaryMirror (1.67s)
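TestBinaryMirror points "--binary-mirror" at a local HTTP server on 127.0.0.1:50272 and verifies that minikube fetches its binaries from there. A minimal sketch of such a stand-in mirror, assuming a local "./mirror" directory laid out the way the client expects (e.g. v1.30.0/bin/darwin/amd64/kubectl); this is an illustration, not the test's own server.

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror at http://127.0.0.1:50272/, mimicking the release layout.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Println("binary mirror listening on 127.0.0.1:50272")
	log.Fatal(http.ListenAndServe("127.0.0.1:50272", nil))
}

With it running, the invocation from the log ("out/minikube-darwin-amd64 start --download-only -p binary-mirror-152000 --binary-mirror http://127.0.0.1:50272 --driver=docker") would resolve binaries against the local server instead of the public release bucket.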

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.15s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-781000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-781000: exit status 85 (153.481109ms)
-- stdout --
	* Profile "addons-781000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-781000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.15s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-781000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-781000: exit status 85 (173.92332ms)
-- stdout --
	* Profile "addons-781000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-781000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

TestAddons/Setup (332.61s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-781000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-781000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (5m32.61321448s)
--- PASS: TestAddons/Setup (332.61s)
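The setup command above enables thirteen addons in a single "minikube start" invocation. A hedged sketch of building that argument list programmatically rather than as one long string; the profile, flags, and addon names mirror the log, while the bare "minikube" binary name is an assumption (the report itself uses out/minikube-darwin-amd64).

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	addons := []string{
		"registry", "metrics-server", "volumesnapshots", "csi-hostpath-driver",
		"gcp-auth", "cloud-spanner", "inspektor-gadget", "storage-provisioner-rancher",
		"nvidia-device-plugin", "yakd", "ingress", "ingress-dns", "helm-tiller",
	}
	args := []string{"start", "-p", "addons-781000", "--wait=true", "--memory=4000", "--driver=docker"}
	for _, a := range addons {
		args = append(args, "--addons="+a) // one flag per addon, as in the log
	}
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}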

TestAddons/parallel/InspektorGadget (10.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8qvm8" [0fdbb76e-e56c-4f07-8b47-bb6f8ab891b9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003579763s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-781000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-781000: (5.756083902s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.583308ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-lldrg" [046fd37f-40d9-4015-9b39-f4c2f0602c04] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005219304s
addons_test.go:415: (dbg) Run:  kubectl --context addons-781000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-781000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

TestAddons/parallel/HelmTiller (9.83s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 2.180901ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-vx9nn" [25dc2938-e91f-49ce-8883-ce0513f44069] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005152127s
addons_test.go:473: (dbg) Run:  kubectl --context addons-781000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-781000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.139555132s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-781000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.83s)

TestAddons/parallel/CSI (51.02s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 14.883758ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-781000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-781000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cb5eff37-e6cc-4fd5-a9a7-e455256dc850] Pending
helpers_test.go:344: "task-pv-pod" [cb5eff37-e6cc-4fd5-a9a7-e455256dc850] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cb5eff37-e6cc-4fd5-a9a7-e455256dc850] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004230869s
addons_test.go:584: (dbg) Run:  kubectl --context addons-781000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-781000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-781000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-781000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-781000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-781000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-781000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [60d6c037-3ed8-4973-85a2-e65efffd640d] Pending
helpers_test.go:344: "task-pv-pod-restore" [60d6c037-3ed8-4973-85a2-e65efffd640d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [60d6c037-3ed8-4973-85a2-e65efffd640d] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005738359s
addons_test.go:626: (dbg) Run:  kubectl --context addons-781000 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-781000 delete pod task-pv-pod-restore: (1.008523834s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-781000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-781000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-781000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-781000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.811559826s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-781000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.02s)
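The repeated helpers_test.go:394 lines above are a poll: the harness re-runs "kubectl get pvc ... -o jsonpath={.status.phase}" until the claim reports Bound or the 6m0s wait expires. A minimal Go sketch of that loop; the context and PVC names mirror the log, and the poll interval is an illustrative choice.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-781000",
			"get", "pvc", "hpvc-restore", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc is Bound")
			return
		}
		time.Sleep(2 * time.Second) // illustrative poll interval
	}
	log.Fatal("timed out waiting for pvc to bind")
}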

TestAddons/parallel/Headlamp (13.18s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-781000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-781000 --alsologtostderr -v=1: (1.174346569s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-6gsqq" [ed402ec2-490f-4f22-8e61-d41e1478ce22] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-6gsqq" [ed402ec2-490f-4f22-8e61-d41e1478ce22] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004302968s
--- PASS: TestAddons/parallel/Headlamp (13.18s)

TestAddons/parallel/CloudSpanner (5.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-9jrhr" [6791f417-6fea-45dd-b799-d923630e40d1] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00447767s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-781000
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

TestAddons/parallel/LocalPath (52.87s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-781000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-781000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-781000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a118faa9-bedc-4184-95db-b06f876ead0c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a118faa9-bedc-4184-95db-b06f876ead0c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a118faa9-bedc-4184-95db-b06f876ead0c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005382254s
addons_test.go:891: (dbg) Run:  kubectl --context addons-781000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-781000 ssh "cat /opt/local-path-provisioner/pvc-95e6b39b-e42c-45ff-a66d-3eea9bab7c2e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-781000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-781000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-781000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-781000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.994693291s)
--- PASS: TestAddons/parallel/LocalPath (52.87s)

TestAddons/parallel/NvidiaDevicePlugin (5.61s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gspqc" [0cd764a4-6611-4aed-9b53-02f3ffc099f6] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005086255s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-781000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-tjctd" [87e729f8-6e94-478b-9dcb-fe5c7c76dce7] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006026437s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-781000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-781000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.67s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-781000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-781000: (10.940374029s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-781000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-781000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-781000
--- PASS: TestAddons/StoppedEnableDisable (11.67s)

TestHyperKitDriverInstallOrUpdate (8.25s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.25s)

TestErrorSpam/setup (20.64s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-703000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-703000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 --driver=docker : (20.643742488s)
--- PASS: TestErrorSpam/setup (20.64s)

TestErrorSpam/start (2.09s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 start --dry-run
--- PASS: TestErrorSpam/start (2.09s)

TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.7s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

TestErrorSpam/stop (11.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 stop: (10.720525989s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-703000 stop
--- PASS: TestErrorSpam/stop (11.39s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18773-22625/.minikube/files/etc/test/nested/copy/23094/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (35.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-154000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (35.376012495s)
--- PASS: TestFunctional/serial/StartWithProxy (35.38s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.08s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-154000 --alsologtostderr -v=8: (34.07775924s)
functional_test.go:659: soft start took 34.078234068s for "functional-154000" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.08s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-154000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 cache add registry.k8s.io/pause:3.1: (1.16571103s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 cache add registry.k8s.io/pause:3.3: (1.217385323s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 cache add registry.k8s.io/pause:latest: (1.048415825s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

TestFunctional/serial/CacheCmd/cache/add_local (1.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1289408211/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 cache add minikube-local-cache-test:functional-154000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 cache add minikube-local-cache-test:functional-154000: (1.090578297s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 cache delete minikube-local-cache-test:functional-154000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-154000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.63s)
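The add_local test above builds a throwaway image from a temporary build context, loads it into the cluster with "cache add", then removes it again. A hedged sketch of that same cycle; the image tag and profile mirror the log, while the build-context path and bare binary names are illustrative.

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts with its combined output on failure.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	img := "minikube-local-cache-test:functional-154000"
	run("docker", "build", "-t", img, "./build-context") // illustrative context dir
	run("minikube", "-p", "functional-154000", "cache", "add", img)
	run("minikube", "-p", "functional-154000", "cache", "delete", img)
	run("docker", "rmi", img) // clean up the host-side image
}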

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.40s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (375.555712ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (1.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 kubectl -- --context functional-154000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 kubectl -- --context functional-154000 get pods: (1.023151814s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.02s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.46s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-154000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-154000 get pods: (1.463873346s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.46s)

TestFunctional/serial/ExtraConfig (41.96s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-154000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.961950521s)
functional_test.go:757: restart took 41.962095084s for "functional-154000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.96s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-154000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
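ComponentHealth lists the control-plane pods as JSON and checks each one's phase and Ready condition, which is what the functional_test.go:821/831 lines above print. A minimal Go sketch of the same check, decoding only the fields it needs from the core/v1 Pod schema; the context name mirrors the log and the bare "kubectl" binary is assumed on PATH.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-154000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}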

TestFunctional/serial/LogsCmd (3.16s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 logs
E0429 06:24:06.707879   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:24:06.788417   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:24:06.798680   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:24:06.820884   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:24:06.861000   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 logs: (3.158785966s)
--- PASS: TestFunctional/serial/LogsCmd (3.16s)

TestFunctional/serial/LogsFileCmd (3.12s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd299576248/001/logs.txt
E0429 06:24:06.941133   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:24:07.106213   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:24:07.426518   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:24:08.068768   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:24:09.350690   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd299576248/001/logs.txt: (3.117284377s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.12s)

TestFunctional/serial/InvalidService (4.02s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-154000 apply -f testdata/invalidsvc.yaml
E0429 06:24:11.911068   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-154000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-154000: exit status 115 (540.589653ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30533 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-154000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.02s)
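InvalidService applies a Service with no running backing pod, then expects `minikube service` to fail with exit status 115 (SVC_UNREACHABLE), as seen above. A minimal sketch of that exit-code assertion, assuming `minikube` on PATH and the same profile and service names:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Assumed: profile and service names mirror the test run above.
		cmd := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-154000")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
			fmt.Println("got expected exit status 115 (SVC_UNREACHABLE)")
			return
		}
		fmt.Fprintf(os.Stderr, "expected exit 115, got %v\n", err)
		os.Exit(1)
	}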

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 config get cpus: exit status 14 (65.487613ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 config get cpus: exit status 14 (65.56951ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
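ConfigCmd round-trips a key through `config set`, `config get`, and `config unset`; exit status 14 is what `config get` returns for a key that is not present, as both non-zero exits above show. A sketch of the same round trip, assuming `minikube` on PATH (the profile name is taken from the run above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// run executes minikube with the given args and returns trimmed stdout plus the exit code.
	func run(args ...string) (string, int) {
		out, err := exec.Command("minikube", args...).Output()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode()
		}
		return strings.TrimSpace(string(out)), code
	}

	func main() {
		p := "functional-154000" // assumed profile name, as in the test above
		run("-p", p, "config", "unset", "cpus")
		if _, code := run("-p", p, "config", "get", "cpus"); code != 14 {
			fmt.Println("expected exit 14 for an unset key, got", code)
		}
		run("-p", p, "config", "set", "cpus", "2")
		if val, _ := run("-p", p, "config", "get", "cpus"); val != "2" {
			fmt.Println("expected cpus=2, got", val)
		}
		run("-p", p, "config", "unset", "cpus")
	}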

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-154000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-154000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25971: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.73s)

                                                
                                    
TestFunctional/parallel/DryRun (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-154000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (1.033800402s)

                                                
                                                
-- stdout --
	* [functional-154000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 06:25:46.247472   25856 out.go:291] Setting OutFile to fd 1 ...
	I0429 06:25:46.247805   25856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:25:46.247811   25856 out.go:304] Setting ErrFile to fd 2...
	I0429 06:25:46.247815   25856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:25:46.247987   25856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 06:25:46.249701   25856 out.go:298] Setting JSON to false
	I0429 06:25:46.275591   25856 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":15920,"bootTime":1714381226,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 06:25:46.275708   25856 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 06:25:46.296970   25856 out.go:177] * [functional-154000] minikube v1.33.0 on Darwin 14.4.1
	I0429 06:25:46.376072   25856 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 06:25:46.354199   25856 notify.go:220] Checking for updates...
	I0429 06:25:46.433712   25856 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	I0429 06:25:46.507455   25856 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 06:25:46.549677   25856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 06:25:46.591755   25856 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	I0429 06:25:46.633616   25856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 06:25:46.655479   25856 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 06:25:46.656274   25856 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 06:25:46.810802   25856 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 06:25:46.811023   25856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 06:25:47.022739   25856 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-29 13:25:46.946424562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 06:25:47.082792   25856 out.go:177] * Using the docker driver based on existing profile
	I0429 06:25:47.103794   25856 start.go:297] selected driver: docker
	I0429 06:25:47.103835   25856 start.go:901] validating driver "docker" against &{Name:functional-154000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-154000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 06:25:47.104031   25856 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 06:25:47.129492   25856 out.go:177] 
	W0429 06:25:47.150658   25856 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0429 06:25:47.171691   25856 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.97s)
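The dry run fails with exit status 23 because the requested 250MB is below minikube's usable minimum of 1800MB; the second invocation, which omits --memory, validates cleanly against the existing profile (Memory:4000). The gate can be summarized as below; this is a sketch of the check implied by the error text, not minikube's actual implementation:

	package main

	import "fmt"

	// validateMemory sketches the gate implied by the RSRC_INSUFFICIENT_REQ_MEMORY
	// error above; the 1800MB floor is taken from the message text.
	func validateMemory(requestMB int) error {
		const minUsableMB = 1800
		if requestMB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB", requestMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // rejected, matching the exit-23 dry run above
		fmt.Println(validateMemory(4000)) // <nil>: matches the profile's Memory:4000
	}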

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-154000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-154000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (702.075395ms)

                                                
                                                
-- stdout --
	* [functional-154000] minikube v1.33.0 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=18773
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 06:25:48.208649   25930 out.go:291] Setting OutFile to fd 1 ...
	I0429 06:25:48.208822   25930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:25:48.208828   25930 out.go:304] Setting ErrFile to fd 2...
	I0429 06:25:48.208831   25930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:25:48.209039   25930 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 06:25:48.210622   25930 out.go:298] Setting JSON to false
	I0429 06:25:48.233725   25930 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":15922,"bootTime":1714381226,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 06:25:48.233807   25930 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 06:25:48.255106   25930 out.go:177] * [functional-154000] minikube v1.33.0 sur Darwin 14.4.1
	I0429 06:25:48.318098   25930 out.go:177]   - MINIKUBE_LOCATION=18773
	I0429 06:25:48.297295   25930 notify.go:220] Checking for updates...
	I0429 06:25:48.338783   25930 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
	I0429 06:25:48.359070   25930 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 06:25:48.400872   25930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 06:25:48.443161   25930 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube
	I0429 06:25:48.463892   25930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 06:25:48.485461   25930 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 06:25:48.485872   25930 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 06:25:48.542077   25930 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0429 06:25:48.542257   25930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 06:25:48.662962   25930 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:105 SystemTime:2024-04-29 13:25:48.651526711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211092480 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0429 06:25:48.705224   25930 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0429 06:25:48.726688   25930 start.go:297] selected driver: docker
	I0429 06:25:48.726719   25930 start.go:901] validating driver "docker" against &{Name:functional-154000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-154000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 06:25:48.726830   25930 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 06:25:48.769252   25930 out.go:177] 
	W0429 06:25:48.790505   25930 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0429 06:25:48.811424   25930 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.70s)
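InternationalLanguage reruns the failing dry run and asserts the output is localized: the French lines above say what the English run said, i.e. "Using the docker driver based on existing profile" and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested 250 MiB memory allocation is below the usable minimum of 1800 MB". A sketch of forcing the locale from a harness, assuming minikube picks the message language from the standard locale environment variables (an assumption, not confirmed by this log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "functional-154000",
			"--dry-run", "--memory", "250MB", "--driver=docker")
		// Assumption: the message language follows the locale environment.
		cmd.Env = append(os.Environ(), "LC_ALL=fr")
		out, _ := cmd.CombinedOutput() // exit status 23 is expected, as in the English run
		fmt.Printf("%s", out)
	}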

                                                
                                    
TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)
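StatusCmd checks three output modes of `minikube status`: the default table, a Go template (note the template above spells the key `kublet`; that matches the test source verbatim, so it is reproduced as-is here), and JSON. A sketch of consuming the JSON form, assuming the payload carries the same field names the template references:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// status holds the fields the -f template above references; the JSON field
	// names are an assumption about minikube's single-node status output.
	type status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-154000", "status", "-o", "json").Output()
		if err != nil {
			// status exits non-zero when a component is down; stdout is still JSON.
			fmt.Println("status returned non-zero:", err)
		}
		var s status
		if err := json.Unmarshal(out, &s); err != nil {
			fmt.Println("unexpected status payload:", err)
			return
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			s.Host, s.Kubelet, s.APIServer, s.Kubeconfig)
	}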

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (37.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [87eece33-0416-41c8-b406-74f0965874ad] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003340904s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-154000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-154000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-154000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-154000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4930bf75-bc66-4564-ae21-4fc870ca5086] Pending
helpers_test.go:344: "sp-pod" [4930bf75-bc66-4564-ae21-4fc870ca5086] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4930bf75-bc66-4564-ae21-4fc870ca5086] Running
E0429 06:25:28.714192   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004508571s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-154000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-154000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-154000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6c956cba-2458-48df-b9d1-1f0b8552eec9] Pending
helpers_test.go:344: "sp-pod" [6c956cba-2458-48df-b9d1-1f0b8552eec9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6c956cba-2458-48df-b9d1-1f0b8552eec9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004055822s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-154000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.43s)
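The PVC test is mostly polling: apply a manifest, then wait (up to 3m0s here) for pods matching a label to reach Running before exercising the mounted volume. A sketch of such a wait loop using kubectl's jsonpath output, with the context name taken from the run above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPhase polls kubectl until the first pod matching the label selector
	// reaches the wanted phase, mirroring the bounded waits in the test above.
	func waitForPhase(selector, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl", "--context", "functional-154000",
				"get", "pods", "-l", selector,
				"-o", "jsonpath={.items[0].status.phase}").Output()
			if strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pods %q never reached %s", selector, want)
	}

	func main() {
		fmt.Println(waitForPhase("test=storage-provisioner", "Running", 3*time.Minute))
	}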

                                                
                                    
TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh -n functional-154000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 cp functional-154000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd183203420/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh -n functional-154000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh -n functional-154000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.91s)
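CpCmd copies a file into the node, back out, and into a directory that does not yet exist, verifying each copy with `ssh "sudo cat"`. A sketch of one round trip with a byte-for-byte comparison, assuming `minikube` on PATH and a local testdata/cp-test.txt:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Assumed: the profile exists and testdata/cp-test.txt is readable locally.
		local := "testdata/cp-test.txt"
		want, err := os.ReadFile(local)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		p := "functional-154000"
		if out, err := exec.Command("minikube", "-p", p, "cp", local, "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "cp failed: %v\n%s", err, out)
			os.Exit(1)
		}
		got, err := exec.Command("minikube", "-p", p, "ssh", "-n", p, "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			fmt.Fprintln(os.Stderr, "round-tripped file does not match the local copy")
			os.Exit(1)
		}
		fmt.Println("cp round trip matched")
	}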

                                                
                                    
TestFunctional/parallel/MySQL (30.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-154000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-5xr7k" [2d3c049b-59b0-40b3-9447-210cb59bd013] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-5xr7k" [2d3c049b-59b0-40b3-9447-210cb59bd013] Running
E0429 06:24:47.752115   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.003762576s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-154000 exec mysql-64454c8b5c-5xr7k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-154000 exec mysql-64454c8b5c-5xr7k -- mysql -ppassword -e "show databases;": exit status 1 (121.265899ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-154000 exec mysql-64454c8b5c-5xr7k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-154000 exec mysql-64454c8b5c-5xr7k -- mysql -ppassword -e "show databases;": exit status 1 (109.414866ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-154000 exec mysql-64454c8b5c-5xr7k -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.78s)
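The two non-zero exits above are expected start-up noise: ERROR 1045 fires while the root account is still being provisioned and ERROR 2002 while mysqld's socket is not yet up; the test simply retries until `show databases;` succeeds. A sketch of that retry loop (the pod name is hard-coded from the log above; the real test discovers it by label):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		pod := "mysql-64454c8b5c-5xr7k" // assumed pod name, copied from the log
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-154000",
				"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			// ERROR 1045/2002 are expected while mysqld is still initializing.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("mysql never became ready")
	}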

                                                
                                    
TestFunctional/parallel/FileSync (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/23094/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "sudo cat /etc/test/nested/copy/23094/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)

                                                
                                    
TestFunctional/parallel/CertSync (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/23094.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "sudo cat /etc/ssl/certs/23094.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/23094.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "sudo cat /usr/share/ca-certificates/23094.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/230942.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "sudo cat /etc/ssl/certs/230942.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/230942.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "sudo cat /usr/share/ca-certificates/230942.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.51s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-154000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
E0429 06:24:17.031288   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 ssh "sudo systemctl is-active crio": exit status 1 (460.898972ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
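With the docker container runtime active, `systemctl is-active crio` prints `inactive` and exits 3 (the systemd convention for an inactive unit), which `minikube ssh` surfaces as the non-zero exit seen above. A sketch that checks the printed state rather than the exit code, assuming `minikube` on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// is-active exits 3 for an inactive unit, so the error is deliberately ignored;
		// Output still returns the captured stdout alongside the ExitError.
		out, _ := exec.Command("minikube", "-p", "functional-154000",
			"ssh", "sudo systemctl is-active crio").Output()
		if strings.TrimSpace(string(out)) == "inactive" {
			fmt.Println("crio is disabled, as expected with the docker runtime")
		} else {
			fmt.Printf("unexpected crio state: %q\n", out)
		}
	}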

                                                
                                    
TestFunctional/parallel/License (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.66s)

                                                
                                    
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-154000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-154000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-154000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-154000 image ls --format short --alsologtostderr:
I0429 06:26:01.546972   26215 out.go:291] Setting OutFile to fd 1 ...
I0429 06:26:01.547755   26215 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 06:26:01.547764   26215 out.go:304] Setting ErrFile to fd 2...
I0429 06:26:01.547797   26215 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 06:26:01.548205   26215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
I0429 06:26:01.548924   26215 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 06:26:01.549016   26215 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 06:26:01.549395   26215 cli_runner.go:164] Run: docker container inspect functional-154000 --format={{.State.Status}}
I0429 06:26:01.598254   26215 ssh_runner.go:195] Run: systemctl --version
I0429 06:26:01.598330   26215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154000
I0429 06:26:01.646847   26215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51037 SSHKeyPath:/Users/jenkins/minikube-integration/18773-22625/.minikube/machines/functional-154000/id_rsa Username:docker}
I0429 06:26:01.731571   26215 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-154000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | alpine            | f4215f6ee683f | 48.3MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/google-containers/addon-resizer      | functional-154000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-154000 | a21b82ac71cfd | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/library/nginx                     | latest            | 7383c266ef252 | 188MB  |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-154000 image ls --format table --alsologtostderr:
I0429 06:26:02.949138   26250 out.go:291] Setting OutFile to fd 1 ...
I0429 06:26:02.949433   26250 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 06:26:02.949439   26250 out.go:304] Setting ErrFile to fd 2...
I0429 06:26:02.949442   26250 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 06:26:02.949642   26250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
I0429 06:26:02.950247   26250 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 06:26:02.950338   26250 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 06:26:02.950789   26250 cli_runner.go:164] Run: docker container inspect functional-154000 --format={{.State.Status}}
I0429 06:26:03.001353   26250 ssh_runner.go:195] Run: systemctl --version
I0429 06:26:03.001428   26250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154000
I0429 06:26:03.050427   26250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51037 SSHKeyPath:/Users/jenkins/minikube-integration/18773-22625/.minikube/machines/functional-154000/id_rsa Username:docker}
I0429 06:26:03.132795   26250 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)
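Besides the short and table formats, `image ls` has a JSON format (exercised by the next test) that is the easiest to consume programmatically. A sketch of decoding it, with the struct fields inferred from the JSON output below rather than taken from minikube's own types:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors the fields visible in the JSON listing below; note that
	// size is a string of bytes in that output, not a number.
	type image struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("minikube", "-p", "functional-154000",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			fmt.Println("image ls failed:", err)
			return
		}
		var imgs []image
		if err := json.Unmarshal(out, &imgs); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, img := range imgs {
			fmt.Println(img.RepoTags, img.Size, "bytes")
		}
	}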

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-154000 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a21b82ac71cfd59387119b2909b30fb9e73c3000c6673fd8eeb520fc2ec601b8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-154000"],"size":"30"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"3861cfcd7
c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-154000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499",
"repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"
188000000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-154000 image ls --format json --alsologtostderr:
I0429 06:26:02.645724   26244 out.go:291] Setting OutFile to fd 1 ...
I0429 06:26:02.645929   26244 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 06:26:02.645934   26244 out.go:304] Setting ErrFile to fd 2...
I0429 06:26:02.645937   26244 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 06:26:02.646509   26244 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
I0429 06:26:02.647585   26244 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 06:26:02.647679   26244 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 06:26:02.648078   26244 cli_runner.go:164] Run: docker container inspect functional-154000 --format={{.State.Status}}
I0429 06:26:02.698593   26244 ssh_runner.go:195] Run: systemctl --version
I0429 06:26:02.698661   26244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154000
I0429 06:26:02.753533   26244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51037 SSHKeyPath:/Users/jenkins/minikube-integration/18773-22625/.minikube/machines/functional-154000/id_rsa Username:docker}
I0429 06:26:02.837197   26244 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

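The three ImageList subtests around this point exercise the same command with different output formats. A minimal sketch of reproducing them by hand against this run's profile (binary path and profile name taken from the log; a stock minikube binary works the same way):

  # List images cached in the cluster in each supported format
  out/minikube-darwin-amd64 -p functional-154000 image ls --format table
  out/minikube-darwin-amd64 -p functional-154000 image ls --format json
  out/minikube-darwin-amd64 -p functional-154000 image ls --format yaml
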
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-154000 image ls --format yaml --alsologtostderr:
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-154000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: a21b82ac71cfd59387119b2909b30fb9e73c3000c6673fd8eeb520fc2ec601b8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-154000
size: "30"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-154000 image ls --format yaml --alsologtostderr:
I0429 06:26:01.839178   26221 out.go:291] Setting OutFile to fd 1 ...
I0429 06:26:01.839476   26221 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 06:26:01.839482   26221 out.go:304] Setting ErrFile to fd 2...
I0429 06:26:01.839485   26221 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 06:26:01.839663   26221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
I0429 06:26:01.840287   26221 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 06:26:01.840379   26221 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 06:26:01.840790   26221 cli_runner.go:164] Run: docker container inspect functional-154000 --format={{.State.Status}}
I0429 06:26:01.889853   26221 ssh_runner.go:195] Run: systemctl --version
I0429 06:26:01.889924   26221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154000
I0429 06:26:01.938701   26221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51037 SSHKeyPath:/Users/jenkins/minikube-integration/18773-22625/.minikube/machines/functional-154000/id_rsa Username:docker}
I0429 06:26:02.023039   26221 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh pgrep buildkitd
2024/04/29 06:26:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 ssh pgrep buildkitd: exit status 1 (345.74705ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image build -t localhost/my-image:functional-154000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 image build -t localhost/my-image:functional-154000 testdata/build --alsologtostderr: (2.021683959s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-154000 image build -t localhost/my-image:functional-154000 testdata/build --alsologtostderr:
I0429 06:26:02.481195   26238 out.go:291] Setting OutFile to fd 1 ...
I0429 06:26:02.482124   26238 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 06:26:02.482132   26238 out.go:304] Setting ErrFile to fd 2...
I0429 06:26:02.482136   26238 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 06:26:02.482323   26238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
I0429 06:26:02.482962   26238 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 06:26:02.483649   26238 config.go:182] Loaded profile config "functional-154000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 06:26:02.484075   26238 cli_runner.go:164] Run: docker container inspect functional-154000 --format={{.State.Status}}
I0429 06:26:02.533554   26238 ssh_runner.go:195] Run: systemctl --version
I0429 06:26:02.533625   26238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154000
I0429 06:26:02.582319   26238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51037 SSHKeyPath:/Users/jenkins/minikube-integration/18773-22625/.minikube/machines/functional-154000/id_rsa Username:docker}
I0429 06:26:02.664264   26238 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3617104951.tar
I0429 06:26:02.664357   26238 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0429 06:26:02.673316   26238 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3617104951.tar
I0429 06:26:02.677623   26238 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3617104951.tar: stat -c "%s %y" /var/lib/minikube/build/build.3617104951.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3617104951.tar': No such file or directory
I0429 06:26:02.677655   26238 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3617104951.tar --> /var/lib/minikube/build/build.3617104951.tar (3072 bytes)
I0429 06:26:02.700391   26238 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3617104951
I0429 06:26:02.710303   26238 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3617104951 -xf /var/lib/minikube/build/build.3617104951.tar
I0429 06:26:02.721060   26238 docker.go:360] Building image: /var/lib/minikube/build/build.3617104951
I0429 06:26:02.721137   26238 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-154000 /var/lib/minikube/build/build.3617104951
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:49ab6fa304901c721c927b1db8e729d7dddf93af3b5e0aa8e1e9188fc1499fb0 done
#8 naming to localhost/my-image:functional-154000 done
#8 DONE 0.0s
I0429 06:26:04.394991   26238 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-154000 /var/lib/minikube/build/build.3617104951: (1.67382906s)
I0429 06:26:04.395062   26238 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3617104951
I0429 06:26:04.403560   26238 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3617104951.tar
I0429 06:26:04.411680   26238 build_images.go:217] Built localhost/my-image:functional-154000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3617104951.tar
I0429 06:26:04.411708   26238 build_images.go:133] succeeded building to: functional-154000
I0429 06:26:04.411713   26238 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)

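Judging from the BuildKit steps in the log above (#5 FROM gcr.io/k8s-minikube/busybox, #6 RUN true, #7 ADD content.txt), testdata/build presumably holds a three-step Dockerfile; the test packs it into a tar, copies it to the node, and builds it inside the cluster's daemon. A minimal sketch of the same flow (assumes the testdata/build directory from the minikube repo is present):

  # Build an image inside the cluster's Docker daemon, then confirm it is listed
  out/minikube-darwin-amd64 -p functional-154000 image build -t localhost/my-image:functional-154000 testdata/build --alsologtostderr
  out/minikube-darwin-amd64 -p functional-154000 image ls
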
TestFunctional/parallel/ImageCommands/Setup (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.43259658s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-154000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.52s)

TestFunctional/parallel/DockerEnv/bash (2.11s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-154000 docker-env) && out/minikube-darwin-amd64 status -p functional-154000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-154000 docker-env) && out/minikube-darwin-amd64 status -p functional-154000": (1.300567997s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-154000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.11s)

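The DockerEnv test points the host docker CLI at the cluster's daemon via docker-env. A minimal sketch of the same round trip (commands taken verbatim from the log):

  # Export DOCKER_HOST and friends for this profile, then talk to its daemon directly
  eval $(out/minikube-darwin-amd64 -p functional-154000 docker-env)
  docker images
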
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image load --daemon gcr.io/google-containers/addon-resizer:functional-154000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 image load --daemon gcr.io/google-containers/addon-resizer:functional-154000 --alsologtostderr: (3.805608277s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.12s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

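All three UpdateContextCmd subtests run the same command; only the pre-seeded kubeconfig state differs between them. A minimal sketch:

  # Re-point this profile's kubeconfig entry at the current cluster endpoint
  out/minikube-darwin-amd64 -p functional-154000 update-context --alsologtostderr -v=2
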
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image load --daemon gcr.io/google-containers/addon-resizer:functional-154000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 image load --daemon gcr.io/google-containers/addon-resizer:functional-154000 --alsologtostderr: (2.049806223s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.39s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.255893171s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-154000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image load --daemon gcr.io/google-containers/addon-resizer:functional-154000 --alsologtostderr
E0429 06:24:27.271650   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 image load --daemon gcr.io/google-containers/addon-resizer:functional-154000 --alsologtostderr: (3.878729431s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.52s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image save gcr.io/google-containers/addon-resizer:functional-154000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 image save gcr.io/google-containers/addon-resizer:functional-154000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.696512791s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.70s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image rm gcr.io/google-containers/addon-resizer:functional-154000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.099914031s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.42s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-154000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 image save --daemon gcr.io/google-containers/addon-resizer:functional-154000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-154000 image save --daemon gcr.io/google-containers/addon-resizer:functional-154000 --alsologtostderr: (1.484013065s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-154000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.62s)

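Taken together, the Save/Remove/Load/SaveDaemon subtests above form a round trip through a tarball and back into the host daemon. A condensed sketch (image name and tar path taken from the log):

  # Save a cluster image to a tar, drop it from the cluster, reload it,
  # then push it back into the host's Docker daemon
  out/minikube-darwin-amd64 -p functional-154000 image save gcr.io/google-containers/addon-resizer:functional-154000 /Users/jenkins/workspace/addon-resizer-save.tar
  out/minikube-darwin-amd64 -p functional-154000 image rm gcr.io/google-containers/addon-resizer:functional-154000
  out/minikube-darwin-amd64 -p functional-154000 image load /Users/jenkins/workspace/addon-resizer-save.tar
  out/minikube-darwin-amd64 -p functional-154000 image save --daemon gcr.io/google-containers/addon-resizer:functional-154000
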
TestFunctional/parallel/ServiceCmd/DeployApp (15.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-154000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-154000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-xsh9c" [f0aba1c9-b50a-4f00-a2eb-dd487cdfef24] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-xsh9c" [f0aba1c9-b50a-4f00-a2eb-dd487cdfef24] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.005035968s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.16s)

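DeployApp stands up the hello-node service that the later ServiceCmd subtests query. A minimal sketch of the setup it performs (image and port taken from the log):

  # Create the deployment and expose it on a NodePort
  kubectl --context functional-154000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-154000 expose deployment hello-node --type=NodePort --port=8080
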
TestFunctional/parallel/ServiceCmd/List (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 service list -o json
functional_test.go:1490: Took "448.88345ms" to run "out/minikube-darwin-amd64 -p functional-154000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-154000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-154000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-154000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-154000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 25633: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 service --namespace=default --https --url hello-node: signal: killed (15.003593395s)

-- stdout --
	https://127.0.0.1:51291

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:51291
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-154000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-154000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [dd82b7ad-2892-454d-876e-2227d7d9effa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [dd82b7ad-2892-454d-876e-2227d7d9effa] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003423741s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-154000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-154000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 25683: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

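The TunnelCmd serial chain starts a tunnel, waits for nginx-svc to receive a LoadBalancer ingress IP, hits it over plain HTTP, and tears the tunnel down. A condensed sketch (the tunnel blocks, so it needs its own shell; commands taken from the log):

  # In one shell: keep the tunnel open
  out/minikube-darwin-amd64 -p functional-154000 tunnel --alsologtostderr
  # In another: read the ingress IP once it is assigned
  kubectl --context functional-154000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
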
TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 service hello-node --url --format={{.IP}}: signal: killed (15.003524705s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 service hello-node --url: signal: killed (15.004579924s)

-- stdout --
	http://127.0.0.1:51355

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:51355
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

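The HTTPS/Format/URL subtests all end in "signal: killed" by design: with the Docker driver on darwin the printed URL is backed by a tunnel held open by the command itself (hence the warning about keeping the terminal open), so the harness reads the URL and then kills the still-running process after 15s, which is why the tests PASS despite the non-zero exit. A minimal sketch:

  # Prints the URL, then stays in the foreground for as long as the tunnel is needed
  out/minikube-darwin-amd64 -p functional-154000 service hello-node --url
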
TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "442.673476ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "87.826504ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "443.752742ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "86.665835ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

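The ProfileCmd subtests time the list command in its full and light variants; --light skips probing each cluster's status, which would explain the ~87ms runs here versus ~443ms for the full listing. A minimal sketch:

  # Full listing (queries each cluster) vs. the light variant (profile config only)
  out/minikube-darwin-amd64 profile list -o json
  out/minikube-darwin-amd64 profile list -o json --light
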
TestFunctional/parallel/MountCmd/any-port (8.34s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2249638291/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714397145910117000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2249638291/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714397145910117000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2249638291/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714397145910117000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2249638291/001/test-1714397145910117000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (443.419478ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 29 13:25 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 29 13:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 29 13:25 test-1714397145910117000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh cat /mount-9p/test-1714397145910117000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-154000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [74d24aff-1058-444a-b7c9-ae9c45b5011f] Pending
helpers_test.go:344: "busybox-mount" [74d24aff-1058-444a-b7c9-ae9c45b5011f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [74d24aff-1058-444a-b7c9-ae9c45b5011f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [74d24aff-1058-444a-b7c9-ae9c45b5011f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006168626s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-154000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2249638291/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.34s)

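MountCmd/any-port shares a host directory into the guest over 9p and verifies it from both sides. A condensed sketch; the mount command blocks while the mount is live, and HOST_DIR is a placeholder for the temp directory used in the log:

  # In one shell: serve the host directory into the guest at /mount-9p
  out/minikube-darwin-amd64 mount -p functional-154000 $HOST_DIR:/mount-9p --alsologtostderr -v=1
  # In another: confirm the 9p mount and inspect its contents
  out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-darwin-amd64 -p functional-154000 ssh -- ls -la /mount-9p
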
TestFunctional/parallel/MountCmd/specific-port (2.5s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2225223711/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (462.989526ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2225223711/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 ssh "sudo umount -f /mount-9p": exit status 1 (394.683197ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-154000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2225223711/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.50s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.94s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054805487/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054805487/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054805487/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T" /mount1: exit status 1 (623.387967ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-154000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-154000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054805487/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054805487/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-154000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3054805487/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.94s)

TestFunctional/delete_addon-resizer_images (0.12s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-154000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-154000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-154000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestMultiControlPlane/serial/StartCluster (97.3s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-926000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0429 06:26:50.634776   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-926000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m36.239335322s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr: (1.061487409s)
--- PASS: TestMultiControlPlane/serial/StartCluster (97.30s)

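StartCluster brings up a multi-control-plane cluster via the --ha flag and then verifies node status. A minimal sketch (flags taken verbatim from the log):

  # Start an HA cluster, then confirm every node reports Running
  out/minikube-darwin-amd64 start -p ha-926000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
  out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr
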
TestMultiControlPlane/serial/DeployApp (5.25s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-926000 -- rollout status deployment/busybox: (2.762000454s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-hzxjn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-k9jsb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-p9mlx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-hzxjn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-k9jsb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-p9mlx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-hzxjn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-k9jsb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-p9mlx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.25s)

TestMultiControlPlane/serial/PingHostFromPods (1.39s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-hzxjn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-hzxjn -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-k9jsb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-k9jsb -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-p9mlx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-926000 -- exec busybox-fc5497c4f-p9mlx -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)

TestMultiControlPlane/serial/AddWorkerNode (18.82s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-926000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-926000 -v=7 --alsologtostderr: (17.525113867s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr: (1.298965799s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (18.82s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-926000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.095127712s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)

TestMultiControlPlane/serial/CopyFile (23.66s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-darwin-amd64 -p ha-926000 status --output json -v=7 --alsologtostderr: (1.293657491s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp testdata/cp-test.txt ha-926000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1523067058/001/cp-test_ha-926000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000:/home/docker/cp-test.txt ha-926000-m02:/home/docker/cp-test_ha-926000_ha-926000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m02 "sudo cat /home/docker/cp-test_ha-926000_ha-926000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000:/home/docker/cp-test.txt ha-926000-m03:/home/docker/cp-test_ha-926000_ha-926000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m03 "sudo cat /home/docker/cp-test_ha-926000_ha-926000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000:/home/docker/cp-test.txt ha-926000-m04:/home/docker/cp-test_ha-926000_ha-926000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m04 "sudo cat /home/docker/cp-test_ha-926000_ha-926000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp testdata/cp-test.txt ha-926000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1523067058/001/cp-test_ha-926000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m02:/home/docker/cp-test.txt ha-926000:/home/docker/cp-test_ha-926000-m02_ha-926000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000 "sudo cat /home/docker/cp-test_ha-926000-m02_ha-926000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m02:/home/docker/cp-test.txt ha-926000-m03:/home/docker/cp-test_ha-926000-m02_ha-926000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m03 "sudo cat /home/docker/cp-test_ha-926000-m02_ha-926000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m02:/home/docker/cp-test.txt ha-926000-m04:/home/docker/cp-test_ha-926000-m02_ha-926000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m04 "sudo cat /home/docker/cp-test_ha-926000-m02_ha-926000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp testdata/cp-test.txt ha-926000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1523067058/001/cp-test_ha-926000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m03:/home/docker/cp-test.txt ha-926000:/home/docker/cp-test_ha-926000-m03_ha-926000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000 "sudo cat /home/docker/cp-test_ha-926000-m03_ha-926000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m03:/home/docker/cp-test.txt ha-926000-m02:/home/docker/cp-test_ha-926000-m03_ha-926000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m02 "sudo cat /home/docker/cp-test_ha-926000-m03_ha-926000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m03:/home/docker/cp-test.txt ha-926000-m04:/home/docker/cp-test_ha-926000-m03_ha-926000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m04 "sudo cat /home/docker/cp-test_ha-926000-m03_ha-926000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp testdata/cp-test.txt ha-926000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1523067058/001/cp-test_ha-926000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m04:/home/docker/cp-test.txt ha-926000:/home/docker/cp-test_ha-926000-m04_ha-926000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000 "sudo cat /home/docker/cp-test_ha-926000-m04_ha-926000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m04:/home/docker/cp-test.txt ha-926000-m02:/home/docker/cp-test_ha-926000-m04_ha-926000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m02 "sudo cat /home/docker/cp-test_ha-926000-m04_ha-926000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 cp ha-926000-m04:/home/docker/cp-test.txt ha-926000-m03:/home/docker/cp-test_ha-926000-m04_ha-926000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m03 "sudo cat /home/docker/cp-test_ha-926000-m04_ha-926000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (23.66s)
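Every cp/ssh pair above follows the same round-trip check: copy a file to a node with minikube cp, then read it back over SSH to confirm the contents arrived intact. One iteration of that loop, taken verbatim from the log:

	# copy the fixture onto a node, then verify it by reading it back
	out/minikube-darwin-amd64 -p ha-926000 cp testdata/cp-test.txt ha-926000-m02:/home/docker/cp-test.txt
	out/minikube-darwin-amd64 -p ha-926000 ssh -n ha-926000-m02 "sudo cat /home/docker/cp-test.txt"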

TestMultiControlPlane/serial/StopSecondaryNode (11.9s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-926000 node stop m02 -v=7 --alsologtostderr: (10.88790316s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr: exit status 7 (1.015693149s)

-- stdout --
	ha-926000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-926000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-926000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-926000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0429 06:28:46.660122   27531 out.go:291] Setting OutFile to fd 1 ...
	I0429 06:28:46.660354   27531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:28:46.660360   27531 out.go:304] Setting ErrFile to fd 2...
	I0429 06:28:46.660364   27531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:28:46.660556   27531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 06:28:46.660763   27531 out.go:298] Setting JSON to false
	I0429 06:28:46.660785   27531 mustload.go:65] Loading cluster: ha-926000
	I0429 06:28:46.660819   27531 notify.go:220] Checking for updates...
	I0429 06:28:46.661163   27531 config.go:182] Loaded profile config "ha-926000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 06:28:46.661177   27531 status.go:255] checking status of ha-926000 ...
	I0429 06:28:46.662539   27531 cli_runner.go:164] Run: docker container inspect ha-926000 --format={{.State.Status}}
	I0429 06:28:46.714240   27531 status.go:330] ha-926000 host status = "Running" (err=<nil>)
	I0429 06:28:46.714275   27531 host.go:66] Checking if "ha-926000" exists ...
	I0429 06:28:46.714536   27531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-926000
	I0429 06:28:46.763909   27531 host.go:66] Checking if "ha-926000" exists ...
	I0429 06:28:46.764225   27531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 06:28:46.764297   27531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-926000
	I0429 06:28:46.815192   27531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51496 SSHKeyPath:/Users/jenkins/minikube-integration/18773-22625/.minikube/machines/ha-926000/id_rsa Username:docker}
	I0429 06:28:46.899975   27531 ssh_runner.go:195] Run: systemctl --version
	I0429 06:28:46.904190   27531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 06:28:46.914304   27531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-926000
	I0429 06:28:46.965249   27531 kubeconfig.go:125] found "ha-926000" server: "https://127.0.0.1:51500"
	I0429 06:28:46.965281   27531 api_server.go:166] Checking apiserver status ...
	I0429 06:28:46.965321   27531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 06:28:46.975826   27531 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2199/cgroup
	W0429 06:28:46.984847   27531 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2199/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 06:28:46.984904   27531 ssh_runner.go:195] Run: ls
	I0429 06:28:46.988737   27531 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51500/healthz ...
	I0429 06:28:46.993416   27531 api_server.go:279] https://127.0.0.1:51500/healthz returned 200:
	ok
	I0429 06:28:46.993435   27531 status.go:422] ha-926000 apiserver status = Running (err=<nil>)
	I0429 06:28:46.993446   27531 status.go:257] ha-926000 status: &{Name:ha-926000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 06:28:46.993457   27531 status.go:255] checking status of ha-926000-m02 ...
	I0429 06:28:46.993718   27531 cli_runner.go:164] Run: docker container inspect ha-926000-m02 --format={{.State.Status}}
	I0429 06:28:47.043545   27531 status.go:330] ha-926000-m02 host status = "Stopped" (err=<nil>)
	I0429 06:28:47.043579   27531 status.go:343] host is not running, skipping remaining checks
	I0429 06:28:47.043591   27531 status.go:257] ha-926000-m02 status: &{Name:ha-926000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 06:28:47.043613   27531 status.go:255] checking status of ha-926000-m03 ...
	I0429 06:28:47.043914   27531 cli_runner.go:164] Run: docker container inspect ha-926000-m03 --format={{.State.Status}}
	I0429 06:28:47.092643   27531 status.go:330] ha-926000-m03 host status = "Running" (err=<nil>)
	I0429 06:28:47.092683   27531 host.go:66] Checking if "ha-926000-m03" exists ...
	I0429 06:28:47.092942   27531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-926000-m03
	I0429 06:28:47.143253   27531 host.go:66] Checking if "ha-926000-m03" exists ...
	I0429 06:28:47.143540   27531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 06:28:47.143593   27531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-926000-m03
	I0429 06:28:47.192790   27531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51601 SSHKeyPath:/Users/jenkins/minikube-integration/18773-22625/.minikube/machines/ha-926000-m03/id_rsa Username:docker}
	I0429 06:28:47.276692   27531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 06:28:47.287407   27531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-926000
	I0429 06:28:47.337568   27531 kubeconfig.go:125] found "ha-926000" server: "https://127.0.0.1:51500"
	I0429 06:28:47.337592   27531 api_server.go:166] Checking apiserver status ...
	I0429 06:28:47.337628   27531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 06:28:47.347972   27531 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2074/cgroup
	W0429 06:28:47.356698   27531 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2074/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 06:28:47.356754   27531 ssh_runner.go:195] Run: ls
	I0429 06:28:47.360483   27531 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51500/healthz ...
	I0429 06:28:47.365410   27531 api_server.go:279] https://127.0.0.1:51500/healthz returned 200:
	ok
	I0429 06:28:47.365424   27531 status.go:422] ha-926000-m03 apiserver status = Running (err=<nil>)
	I0429 06:28:47.365434   27531 status.go:257] ha-926000-m03 status: &{Name:ha-926000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 06:28:47.365446   27531 status.go:255] checking status of ha-926000-m04 ...
	I0429 06:28:47.365700   27531 cli_runner.go:164] Run: docker container inspect ha-926000-m04 --format={{.State.Status}}
	I0429 06:28:47.414464   27531 status.go:330] ha-926000-m04 host status = "Running" (err=<nil>)
	I0429 06:28:47.414489   27531 host.go:66] Checking if "ha-926000-m04" exists ...
	I0429 06:28:47.414746   27531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-926000-m04
	I0429 06:28:47.463597   27531 host.go:66] Checking if "ha-926000-m04" exists ...
	I0429 06:28:47.463858   27531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 06:28:47.463908   27531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-926000-m04
	I0429 06:28:47.512711   27531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51724 SSHKeyPath:/Users/jenkins/minikube-integration/18773-22625/.minikube/machines/ha-926000-m04/id_rsa Username:docker}
	I0429 06:28:47.596760   27531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 06:28:47.607031   27531 status.go:257] ha-926000-m04 status: &{Name:ha-926000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.90s)
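Note that the exit status 7 above is expected: with m02 stopped, the status command still reports the remaining nodes but exits non-zero, and the test passes on exactly that outcome. A sketch of the check, using only commands from this run:

	# stop one control-plane node, then confirm status reflects the degraded cluster
	out/minikube-darwin-amd64 -p ha-926000 node stop m02
	out/minikube-darwin-amd64 -p ha-926000 status || echo "status exited $? (7 here: a node is stopped)"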

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (69.44s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 node start m02 -v=7 --alsologtostderr
E0429 06:29:06.708060   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:29:22.244248   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:22.249409   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:22.260936   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:22.281070   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:22.321865   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:22.402285   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:22.562885   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:22.884656   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:23.525240   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:24.805378   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:27.365849   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:32.486205   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:29:34.475928   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:29:42.727128   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-926000 node start m02 -v=7 --alsologtostderr: (1m8.089928353s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr: (1.297882815s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (69.44s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.061908164s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.06s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (177.72s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-926000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-926000 -v=7 --alsologtostderr
E0429 06:30:03.207451   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-926000 -v=7 --alsologtostderr: (34.227623632s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-926000 --wait=true -v=7 --alsologtostderr
E0429 06:30:44.168461   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:32:06.088810   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-926000 --wait=true -v=7 --alsologtostderr: (2m23.358039677s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-926000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (177.72s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.7s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-926000 node delete m03 -v=7 --alsologtostderr: (10.607345714s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.70s)
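The go-template in the last command walks each node's .status.conditions and prints only the Ready condition's status, so after deleting m03 a healthy run should print one "True" per remaining node. The check, as run in the log:

	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"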

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (32.8s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-926000 stop -v=7 --alsologtostderr: (32.589845077s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr: exit status 7 (212.706347ms)

-- stdout --
	ha-926000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-926000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-926000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0429 06:33:41.738685   28281 out.go:291] Setting OutFile to fd 1 ...
	I0429 06:33:41.738990   28281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:33:41.738997   28281 out.go:304] Setting ErrFile to fd 2...
	I0429 06:33:41.739001   28281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 06:33:41.739176   28281 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18773-22625/.minikube/bin
	I0429 06:33:41.739373   28281 out.go:298] Setting JSON to false
	I0429 06:33:41.739395   28281 mustload.go:65] Loading cluster: ha-926000
	I0429 06:33:41.739438   28281 notify.go:220] Checking for updates...
	I0429 06:33:41.739738   28281 config.go:182] Loaded profile config "ha-926000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 06:33:41.739752   28281 status.go:255] checking status of ha-926000 ...
	I0429 06:33:41.740152   28281 cli_runner.go:164] Run: docker container inspect ha-926000 --format={{.State.Status}}
	I0429 06:33:41.789462   28281 status.go:330] ha-926000 host status = "Stopped" (err=<nil>)
	I0429 06:33:41.789484   28281 status.go:343] host is not running, skipping remaining checks
	I0429 06:33:41.789491   28281 status.go:257] ha-926000 status: &{Name:ha-926000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 06:33:41.789511   28281 status.go:255] checking status of ha-926000-m02 ...
	I0429 06:33:41.789762   28281 cli_runner.go:164] Run: docker container inspect ha-926000-m02 --format={{.State.Status}}
	I0429 06:33:41.837829   28281 status.go:330] ha-926000-m02 host status = "Stopped" (err=<nil>)
	I0429 06:33:41.837865   28281 status.go:343] host is not running, skipping remaining checks
	I0429 06:33:41.837874   28281 status.go:257] ha-926000-m02 status: &{Name:ha-926000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 06:33:41.837894   28281 status.go:255] checking status of ha-926000-m04 ...
	I0429 06:33:41.838194   28281 cli_runner.go:164] Run: docker container inspect ha-926000-m04 --format={{.State.Status}}
	I0429 06:33:41.886352   28281 status.go:330] ha-926000-m04 host status = "Stopped" (err=<nil>)
	I0429 06:33:41.886386   28281 status.go:343] host is not running, skipping remaining checks
	I0429 06:33:41.886396   28281 status.go:257] ha-926000-m04 status: &{Name:ha-926000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.80s)

TestMultiControlPlane/serial/RestartCluster (69.28s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-926000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0429 06:34:06.708612   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
E0429 06:34:22.243921   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
E0429 06:34:49.929083   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-926000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m8.174111732s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (69.28s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

TestMultiControlPlane/serial/AddSecondaryNode (35.57s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-926000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-926000 --control-plane -v=7 --alsologtostderr: (34.274894426s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-darwin-amd64 -p ha-926000 status -v=7 --alsologtostderr: (1.297569927s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.57s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.087351987s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

TestImageBuild/serial/Setup (20.3s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-642000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-642000 --driver=docker : (20.30423157s)
--- PASS: TestImageBuild/serial/Setup (20.30s)

TestImageBuild/serial/NormalBuild (1.89s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-642000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-642000: (1.88683822s)
--- PASS: TestImageBuild/serial/NormalBuild (1.89s)

TestImageBuild/serial/BuildWithBuildArg (0.98s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-642000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.98s)

TestImageBuild/serial/BuildWithDockerIgnore (0.78s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-642000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.78s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.83s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-642000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.83s)

TestJSONOutput/start/Command (36.55s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-299000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-299000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (36.546427097s)
--- PASS: TestJSONOutput/start/Command (36.55s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.56s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-299000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-299000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.67s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-299000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-299000 --output=json --user=testUser: (10.665931751s)
--- PASS: TestJSONOutput/stop/Command (10.67s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.77s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-525000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-525000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (388.011328ms)

-- stdout --
	{"specversion":"1.0","id":"b04293ad-795b-46ec-b49f-d34201d79caf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-525000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"025b4633-7787-4572-bb00-59bcf2ecbd56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18773"}}
	{"specversion":"1.0","id":"9bd26b8b-7a67-4570-a2a2-f06191194524","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig"}}
	{"specversion":"1.0","id":"e251e429-3745-44de-b30e-d972940b7c2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"8ea8c1ed-4b81-4045-b167-261cb3592925","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ffe1e42c-3b8c-41d7-aca4-ed6436365004","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18773-22625/.minikube"}}
	{"specversion":"1.0","id":"3605379d-641b-474f-9e2f-3a120cbdce06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f5e73d2c-efbc-47ed-88c1-0311d826255c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-525000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-525000
--- PASS: TestErrorJSONOutput (0.77s)
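Each stdout line above is a CloudEvents-style JSON record, so the failure can be extracted mechanically. A sketch using jq (jq is an assumption here, not part of the test suite):

	# pull the error event out of the JSON stream
	out/minikube-darwin-amd64 start -p json-output-error-525000 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/amd64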

TestKicCustomNetwork/create_custom_network (21.91s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-236000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-236000 --network=: (19.512099218s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-236000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-236000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-236000: (2.346153175s)
--- PASS: TestKicCustomNetwork/create_custom_network (21.91s)

TestKicCustomNetwork/use_default_bridge_network (22.64s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-469000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-469000 --network=bridge: (20.374982655s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-469000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-469000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-469000: (2.216341758s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.64s)

TestKicExistingNetwork (23.12s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-607000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-607000 --network=existing-network: (20.451480215s)
helpers_test.go:175: Cleaning up "existing-network-607000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-607000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-607000: (2.286332516s)
--- PASS: TestKicExistingNetwork (23.12s)

TestKicCustomSubnet (21.63s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-866000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-866000 --subnet=192.168.60.0/24: (19.239182472s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-866000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-866000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-866000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-866000: (2.343667806s)
--- PASS: TestKicCustomSubnet (21.63s)
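The subnet test verifies its flag end to end: create the cluster with --subnet, then read the subnet back from the Docker network itself rather than trusting minikube's own output. Both commands are verbatim from the log:

	out/minikube-darwin-amd64 start -p custom-subnet-866000 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-866000 --format "{{(index .IPAM.Config 0).Subnet}}"    # prints 192.168.60.0/24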

TestKicStaticIP (22.38s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-174000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-174000 --static-ip=192.168.200.200: (19.7837848s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-174000 ip
helpers_test.go:175: Cleaning up "static-ip-174000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-174000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-174000: (2.361380942s)
--- PASS: TestKicStaticIP (22.38s)

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (46.14s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-577000 --driver=docker 
E0429 06:39:06.709323   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/addons-781000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-577000 --driver=docker : (19.394620538s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-579000 --driver=docker 
E0429 06:39:22.244593   23094 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18773-22625/.minikube/profiles/functional-154000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-579000 --driver=docker : (20.013501174s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-577000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-579000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-579000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-579000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-579000: (2.377815479s)
helpers_test.go:175: Cleaning up "first-577000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-577000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-577000: (2.454293121s)
--- PASS: TestMinikubeProfile (46.14s)

TestMountStart/serial/StartWithMountFirst (7.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-780000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-780000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.035978622s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.04s)

TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-780000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (7.05s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-791000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-791000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.0484972s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.05s)
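
Note that the two mount profiles use distinct --mount-port values (46464 and 46465) so their mount servers do not collide on the host. A minimal sketch of the same start-and-verify flow, with an illustrative profile name:

# start a node that mounts a host directory, without Kubernetes (illustrative profile name)
$ minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-uid 0 \
    --mount-msize 6543 --mount-port 46464 --no-kubernetes --driver=docker
# the mount should be visible inside the node
$ minikube -p mount-demo ssh -- ls /minikube-host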

TestPreload (119.05s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-517000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-517000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m12.64028556s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-517000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-517000 image pull gcr.io/k8s-minikube/busybox: (1.497228006s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-517000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-517000: (10.833946474s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-517000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-517000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (31.368902713s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-517000 image list
helpers_test.go:175: Cleaning up "test-preload-517000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-517000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-517000: (2.417359782s)
--- PASS: TestPreload (119.05s)
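
The sequence above is the preload round-trip this test checks: an image pulled into a cluster started with --preload=false must still be present after a stop and a restart. A hedged sketch of the same flow (profile name illustrative):

# start without a preload so the image cache begins empty (illustrative profile name)
$ minikube start -p preload-demo --memory=2200 --preload=false --kubernetes-version=v1.24.4
$ minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
$ minikube stop -p preload-demo
# restart on the default Kubernetes version; the cached image should survive
$ minikube start -p preload-demo
$ minikube -p preload-demo image list | grep busybox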

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (11.22s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18773
- KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1983581932/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1983581932/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1983581932/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1983581932/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (11.22s)
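
The "! Unable to update hyperkit driver" warning above is expected here: the listed commands need sudo, and the test runs with --interactive=false, so no password prompt is possible. Outside CI, the driver binary can be given the setuid-root bits minikube asks for by hand; a sketch assuming the default MINIKUBE_HOME (the path is illustrative):

# let the hyperkit driver manage VMs without re-prompting for sudo (illustrative path)
$ sudo chown root:wheel ~/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s ~/.minikube/bin/docker-machine-driver-hyperkit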

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.69s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18773
- KUBECONFIG=/Users/jenkins/minikube-integration/18773-22625/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2716217102/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2716217102/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2716217102/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2716217102/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.69s)

Test skip (17/203)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestAddons/parallel/Registry (14.77s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 12.407674ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ls6dg" [e5657e81-3632-4115-a696-9c6fae94ec03] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005464646s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-m478g" [35e24e30-9ea7-4d5b-ad36-8873e9a12ee6] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.007477673s
addons_test.go:340: (dbg) Run:  kubectl --context addons-781000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-781000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-781000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.69570647s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.77s)

TestAddons/parallel/Ingress (11.89s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-781000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-781000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-781000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [66cfcbd4-2c54-44ae-b1de-e799f1a5dd69] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [66cfcbd4-2c54-44ae-b1de-e799f1a5dd69] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004681756s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-781000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.89s)
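
The in-node curl above works because it runs over `minikube ssh`; reaching the ingress from the macOS host needs a port-forwarding path, which is why the DNS variant is skipped. A hedged sketch of host-side access via `minikube tunnel` (not part of this run; profile name as created above):

# terminal 1: route service/ingress traffic to the host
$ minikube -p addons-781000 tunnel
# terminal 2: hit the ingress with the expected Host header
$ curl -s -H 'Host: nginx.example.com' http://127.0.0.1/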

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.12s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-154000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-154000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-6gjnm" [f4923e34-9814-42f1-88ad-e7d5a41481cf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-6gjnm" [f4923e34-9814-42f1-88ad-e7d5a41481cf] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003531729s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)
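
The skip at functional_test.go:1642 reflects the same driver limitation: a NodePort on the Docker driver for macOS is not directly routable from the host. A hedged sketch of how the service created above can still be reached (not part of this run; context and names as created by the test):

$ kubectl --context functional-154000 get svc hello-node-connect
# ask minikube for a host-reachable URL instead of using the node IP directly
$ minikube -p functional-154000 service hello-node-connect --url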

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
