Test Report: Docker_macOS 18453

9277aac12dad2c88a60ac507f67489f1590ebf0d:2024-03-19:33652

Test fail (22/209)

TestOffline (751.25s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-947000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-947000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m30.321821863s)

-- stdout --
	* [offline-docker-947000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-947000" primary control-plane node in "offline-docker-947000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-947000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I0319 13:36:20.641289   11323 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:36:20.641554   11323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:36:20.641560   11323 out.go:304] Setting ErrFile to fd 2...
	I0319 13:36:20.641564   11323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:36:20.641749   11323 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:36:20.643389   11323 out.go:298] Setting JSON to false
	I0319 13:36:20.666841   11323 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5751,"bootTime":1710874829,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 13:36:20.666943   11323 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 13:36:20.689294   11323 out.go:177] * [offline-docker-947000] minikube v1.32.0 on Darwin 14.3.1
	I0319 13:36:20.737383   11323 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 13:36:20.737437   11323 notify.go:220] Checking for updates...
	I0319 13:36:20.779563   11323 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 13:36:20.800426   11323 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 13:36:20.821447   11323 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 13:36:20.863306   11323 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	I0319 13:36:20.884468   11323 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 13:36:20.905712   11323 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 13:36:20.961739   11323 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 13:36:20.961898   11323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:36:21.121649   11323 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:false NGoroutines:161 SystemTime:2024-03-19 20:36:21.076869301 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 13:36:21.164604   11323 out.go:177] * Using the docker driver based on user configuration
	I0319 13:36:21.185550   11323 start.go:297] selected driver: docker
	I0319 13:36:21.185562   11323 start.go:901] validating driver "docker" against <nil>
	I0319 13:36:21.185570   11323 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 13:36:21.188805   11323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:36:21.293807   11323 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:false NGoroutines:161 SystemTime:2024-03-19 20:36:21.28174005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 13:36:21.293978   11323 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 13:36:21.294222   11323 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 13:36:21.315799   11323 out.go:177] * Using Docker Desktop driver with root privileges
	I0319 13:36:21.336956   11323 cni.go:84] Creating CNI manager for ""
	I0319 13:36:21.337008   11323 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0319 13:36:21.337067   11323 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 13:36:21.337233   11323 start.go:340] cluster config:
	{Name:offline-docker-947000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-947000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 13:36:21.359029   11323 out.go:177] * Starting "offline-docker-947000" primary control-plane node in "offline-docker-947000" cluster
	I0319 13:36:21.403179   11323 cache.go:121] Beginning downloading kic base image for docker with docker
	I0319 13:36:21.446905   11323 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0319 13:36:21.511110   11323 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:36:21.511204   11323 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0319 13:36:21.511195   11323 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0319 13:36:21.511226   11323 cache.go:56] Caching tarball of preloaded images
	I0319 13:36:21.511448   11323 preload.go:173] Found /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0319 13:36:21.511468   11323 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0319 13:36:21.513237   11323 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/offline-docker-947000/config.json ...
	I0319 13:36:21.513821   11323 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/offline-docker-947000/config.json: {Name:mk01dd9e94459aced382aba3f63933658318ffca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 13:36:21.563243   11323 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0319 13:36:21.563262   11323 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0319 13:36:21.563282   11323 cache.go:194] Successfully downloaded all kic artifacts
	I0319 13:36:21.563324   11323 start.go:360] acquireMachinesLock for offline-docker-947000: {Name:mk30d5c140a33a42f1a5c47137012996e051573d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 13:36:21.563474   11323 start.go:364] duration metric: took 137.536µs to acquireMachinesLock for "offline-docker-947000"
	I0319 13:36:21.563500   11323 start.go:93] Provisioning new machine with config: &{Name:offline-docker-947000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-947000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0319 13:36:21.563616   11323 start.go:125] createHost starting for "" (driver="docker")
	I0319 13:36:21.605969   11323 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0319 13:36:21.606174   11323 start.go:159] libmachine.API.Create for "offline-docker-947000" (driver="docker")
	I0319 13:36:21.606198   11323 client.go:168] LocalClient.Create starting
	I0319 13:36:21.606331   11323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 13:36:21.606390   11323 main.go:141] libmachine: Decoding PEM data...
	I0319 13:36:21.606408   11323 main.go:141] libmachine: Parsing certificate...
	I0319 13:36:21.606488   11323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 13:36:21.606523   11323 main.go:141] libmachine: Decoding PEM data...
	I0319 13:36:21.606531   11323 main.go:141] libmachine: Parsing certificate...
	I0319 13:36:21.607063   11323 cli_runner.go:164] Run: docker network inspect offline-docker-947000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 13:36:21.722918   11323 cli_runner.go:211] docker network inspect offline-docker-947000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 13:36:21.723073   11323 network_create.go:281] running [docker network inspect offline-docker-947000] to gather additional debugging logs...
	I0319 13:36:21.723107   11323 cli_runner.go:164] Run: docker network inspect offline-docker-947000
	W0319 13:36:21.775434   11323 cli_runner.go:211] docker network inspect offline-docker-947000 returned with exit code 1
	I0319 13:36:21.775462   11323 network_create.go:284] error running [docker network inspect offline-docker-947000]: docker network inspect offline-docker-947000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-947000 not found
	I0319 13:36:21.775480   11323 network_create.go:286] output of [docker network inspect offline-docker-947000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-947000 not found
	
	** /stderr **
	I0319 13:36:21.775625   11323 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:36:21.918871   11323 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:36:21.920519   11323 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:36:21.920887   11323 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022b00f0}
	I0319 13:36:21.920916   11323 network_create.go:124] attempt to create docker network offline-docker-947000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0319 13:36:21.920998   11323 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-947000 offline-docker-947000
	W0319 13:36:21.971137   11323 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-947000 offline-docker-947000 returned with exit code 1
	W0319 13:36:21.971186   11323 network_create.go:149] failed to create docker network offline-docker-947000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-947000 offline-docker-947000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0319 13:36:21.971205   11323 network_create.go:116] failed to create docker network offline-docker-947000 192.168.67.0/24, will retry: subnet is taken
	I0319 13:36:21.972591   11323 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:36:21.972993   11323 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021ab240}
	I0319 13:36:21.973005   11323 network_create.go:124] attempt to create docker network offline-docker-947000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0319 13:36:21.973074   11323 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-947000 offline-docker-947000
	I0319 13:36:22.061671   11323 network_create.go:108] docker network offline-docker-947000 192.168.76.0/24 created
	I0319 13:36:22.061720   11323 kic.go:121] calculated static IP "192.168.76.2" for the "offline-docker-947000" container
	I0319 13:36:22.061823   11323 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 13:36:22.113507   11323 cli_runner.go:164] Run: docker volume create offline-docker-947000 --label name.minikube.sigs.k8s.io=offline-docker-947000 --label created_by.minikube.sigs.k8s.io=true
	I0319 13:36:22.165733   11323 oci.go:103] Successfully created a docker volume offline-docker-947000
	I0319 13:36:22.165866   11323 cli_runner.go:164] Run: docker run --rm --name offline-docker-947000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-947000 --entrypoint /usr/bin/test -v offline-docker-947000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 13:36:22.707352   11323 oci.go:107] Successfully prepared a docker volume offline-docker-947000
	I0319 13:36:22.707403   11323 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:36:22.707416   11323 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 13:36:22.707523   11323 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-947000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0319 13:42:21.617812   11323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:42:21.617949   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:42:21.671561   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:42:21.671686   11323 retry.go:31] will retry after 270.203902ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:21.944254   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:42:21.996785   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:42:21.996899   11323 retry.go:31] will retry after 498.423383ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:22.495652   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:42:22.548923   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:42:22.549033   11323 retry.go:31] will retry after 787.522748ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:23.338948   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:42:23.391358   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	W0319 13:42:23.391463   11323 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	
	W0319 13:42:23.391492   11323 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:23.391544   11323 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:42:23.391601   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:42:23.441207   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:42:23.441304   11323 retry.go:31] will retry after 329.703561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:23.773297   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:42:23.824172   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:42:23.824271   11323 retry.go:31] will retry after 451.51724ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:24.277165   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:42:24.327493   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:42:24.327585   11323 retry.go:31] will retry after 551.729252ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:24.879613   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:42:24.932443   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	W0319 13:42:24.932564   11323 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	
	W0319 13:42:24.932580   11323 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:24.932588   11323 start.go:128] duration metric: took 6m3.358955161s to createHost
	I0319 13:42:24.932599   11323 start.go:83] releasing machines lock for "offline-docker-947000", held for 6m3.359113007s
	W0319 13:42:24.932613   11323 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0319 13:42:24.933020   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:24.983033   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	I0319 13:42:24.983086   11323 delete.go:82] Unable to get host status for offline-docker-947000, assuming it has already been deleted: state: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	W0319 13:42:24.983163   11323 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0319 13:42:24.983176   11323 start.go:728] Will try again in 5 seconds ...
	I0319 13:42:29.985354   11323 start.go:360] acquireMachinesLock for offline-docker-947000: {Name:mk30d5c140a33a42f1a5c47137012996e051573d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 13:42:29.986517   11323 start.go:364] duration metric: took 292.348µs to acquireMachinesLock for "offline-docker-947000"
	I0319 13:42:29.986605   11323 start.go:96] Skipping create...Using existing machine configuration
	I0319 13:42:29.986620   11323 fix.go:54] fixHost starting: 
	I0319 13:42:29.987116   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:30.037870   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	I0319 13:42:30.037915   11323 fix.go:112] recreateIfNeeded on offline-docker-947000: state= err=unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:30.037930   11323 fix.go:117] machineExists: false. err=machine does not exist
	I0319 13:42:30.059830   11323 out.go:177] * docker "offline-docker-947000" container is missing, will recreate.
	I0319 13:42:30.103485   11323 delete.go:124] DEMOLISHING offline-docker-947000 ...
	I0319 13:42:30.103669   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:30.155151   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	W0319 13:42:30.155205   11323 stop.go:83] unable to get state: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:30.155228   11323 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:30.155609   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:30.204830   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	I0319 13:42:30.204884   11323 delete.go:82] Unable to get host status for offline-docker-947000, assuming it has already been deleted: state: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:30.204958   11323 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-947000
	W0319 13:42:30.253370   11323 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-947000 returned with exit code 1
	I0319 13:42:30.253414   11323 kic.go:371] could not find the container offline-docker-947000 to remove it. will try anyways
	I0319 13:42:30.253485   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:30.302475   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	W0319 13:42:30.302530   11323 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:30.302607   11323 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-947000 /bin/bash -c "sudo init 0"
	W0319 13:42:30.351561   11323 cli_runner.go:211] docker exec --privileged -t offline-docker-947000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0319 13:42:30.351612   11323 oci.go:650] error shutdown offline-docker-947000: docker exec --privileged -t offline-docker-947000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:31.352405   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:31.405735   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	I0319 13:42:31.405785   11323 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:31.405799   11323 oci.go:664] temporary error: container offline-docker-947000 status is  but expect it to be exited
	I0319 13:42:31.405822   11323 retry.go:31] will retry after 530.564836ms: couldn't verify container is exited. %v: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:31.936735   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:31.989230   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	I0319 13:42:31.989281   11323 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:31.989290   11323 oci.go:664] temporary error: container offline-docker-947000 status is  but expect it to be exited
	I0319 13:42:31.989309   11323 retry.go:31] will retry after 577.859245ms: couldn't verify container is exited. %v: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:32.568912   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:32.620204   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	I0319 13:42:32.620253   11323 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:32.620263   11323 oci.go:664] temporary error: container offline-docker-947000 status is  but expect it to be exited
	I0319 13:42:32.620288   11323 retry.go:31] will retry after 1.245423098s: couldn't verify container is exited. %v: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:33.866590   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:33.919381   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	I0319 13:42:33.919430   11323 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:33.919440   11323 oci.go:664] temporary error: container offline-docker-947000 status is  but expect it to be exited
	I0319 13:42:33.919474   11323 retry.go:31] will retry after 2.056851181s: couldn't verify container is exited. %v: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:35.977701   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:36.029431   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	I0319 13:42:36.029479   11323 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:36.029488   11323 oci.go:664] temporary error: container offline-docker-947000 status is  but expect it to be exited
	I0319 13:42:36.029516   11323 retry.go:31] will retry after 2.219308496s: couldn't verify container is exited. %v: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:38.251198   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:38.304289   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	I0319 13:42:38.304344   11323 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:38.304353   11323 oci.go:664] temporary error: container offline-docker-947000 status is  but expect it to be exited
	I0319 13:42:38.304375   11323 retry.go:31] will retry after 4.801696766s: couldn't verify container is exited. %v: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:43.108507   11323 cli_runner.go:164] Run: docker container inspect offline-docker-947000 --format={{.State.Status}}
	W0319 13:42:43.161333   11323 cli_runner.go:211] docker container inspect offline-docker-947000 --format={{.State.Status}} returned with exit code 1
	I0319 13:42:43.161396   11323 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:42:43.161412   11323 oci.go:664] temporary error: container offline-docker-947000 status is  but expect it to be exited
	I0319 13:42:43.161459   11323 oci.go:88] couldn't shut down offline-docker-947000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	 
	I0319 13:42:43.161532   11323 cli_runner.go:164] Run: docker rm -f -v offline-docker-947000
	I0319 13:42:43.212940   11323 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-947000
	W0319 13:42:43.262850   11323 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-947000 returned with exit code 1
	I0319 13:42:43.262973   11323 cli_runner.go:164] Run: docker network inspect offline-docker-947000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:42:43.312706   11323 cli_runner.go:164] Run: docker network rm offline-docker-947000
	I0319 13:42:43.420554   11323 fix.go:124] Sleeping 1 second for extra luck!
	I0319 13:42:44.422199   11323 start.go:125] createHost starting for "" (driver="docker")
	I0319 13:42:44.445434   11323 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0319 13:42:44.445609   11323 start.go:159] libmachine.API.Create for "offline-docker-947000" (driver="docker")
	I0319 13:42:44.445638   11323 client.go:168] LocalClient.Create starting
	I0319 13:42:44.445849   11323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 13:42:44.445949   11323 main.go:141] libmachine: Decoding PEM data...
	I0319 13:42:44.445974   11323 main.go:141] libmachine: Parsing certificate...
	I0319 13:42:44.446057   11323 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 13:42:44.446138   11323 main.go:141] libmachine: Decoding PEM data...
	I0319 13:42:44.446153   11323 main.go:141] libmachine: Parsing certificate...
	I0319 13:42:44.447020   11323 cli_runner.go:164] Run: docker network inspect offline-docker-947000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 13:42:44.498874   11323 cli_runner.go:211] docker network inspect offline-docker-947000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 13:42:44.498980   11323 network_create.go:281] running [docker network inspect offline-docker-947000] to gather additional debugging logs...
	I0319 13:42:44.499001   11323 cli_runner.go:164] Run: docker network inspect offline-docker-947000
	W0319 13:42:44.547588   11323 cli_runner.go:211] docker network inspect offline-docker-947000 returned with exit code 1
	I0319 13:42:44.547619   11323 network_create.go:284] error running [docker network inspect offline-docker-947000]: docker network inspect offline-docker-947000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-947000 not found
	I0319 13:42:44.547634   11323 network_create.go:286] output of [docker network inspect offline-docker-947000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-947000 not found
	
	** /stderr **
	I0319 13:42:44.547775   11323 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:42:44.598690   11323 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:42:44.600103   11323 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:42:44.601653   11323 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:42:44.603193   11323 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:42:44.604884   11323 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:42:44.605285   11323 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021ab670}
	I0319 13:42:44.605302   11323 network_create.go:124] attempt to create docker network offline-docker-947000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0319 13:42:44.605370   11323 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-947000 offline-docker-947000
	I0319 13:42:44.691237   11323 network_create.go:108] docker network offline-docker-947000 192.168.94.0/24 created
	I0319 13:42:44.691272   11323 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-947000" container
	I0319 13:42:44.691376   11323 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 13:42:44.743482   11323 cli_runner.go:164] Run: docker volume create offline-docker-947000 --label name.minikube.sigs.k8s.io=offline-docker-947000 --label created_by.minikube.sigs.k8s.io=true
	I0319 13:42:44.793064   11323 oci.go:103] Successfully created a docker volume offline-docker-947000
	I0319 13:42:44.793175   11323 cli_runner.go:164] Run: docker run --rm --name offline-docker-947000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-947000 --entrypoint /usr/bin/test -v offline-docker-947000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 13:42:45.087627   11323 oci.go:107] Successfully prepared a docker volume offline-docker-947000
	I0319 13:42:45.087653   11323 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:42:45.087666   11323 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 13:42:45.087764   11323 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-947000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0319 13:48:44.445772   11323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:48:44.445900   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:44.499680   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:44.499791   11323 retry.go:31] will retry after 214.247085ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:44.714420   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:44.767711   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:44.767814   11323 retry.go:31] will retry after 394.240512ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:45.164415   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:45.217045   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:45.217165   11323 retry.go:31] will retry after 824.851579ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:46.042508   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:46.101650   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	W0319 13:48:46.101771   11323 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	
	W0319 13:48:46.101790   11323 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:46.101852   11323 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:48:46.101917   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:46.150885   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:46.150987   11323 retry.go:31] will retry after 316.346523ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:46.469710   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:46.520753   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:46.520843   11323 retry.go:31] will retry after 488.786328ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:47.009815   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:47.060990   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:47.061096   11323 retry.go:31] will retry after 710.550325ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:47.772912   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:47.825228   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	W0319 13:48:47.825337   11323 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	
	W0319 13:48:47.825361   11323 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:47.825373   11323 start.go:128] duration metric: took 6m3.404208478s to createHost
	I0319 13:48:47.825437   11323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:48:47.825489   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:47.874824   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:47.874926   11323 retry.go:31] will retry after 207.849427ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:48.083325   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:48.134395   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:48.134504   11323 retry.go:31] will retry after 335.188705ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:48.471917   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:48.523562   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:48.523651   11323 retry.go:31] will retry after 573.385695ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:49.099243   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:49.150029   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	W0319 13:48:49.150131   11323 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	
	W0319 13:48:49.150150   11323 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:49.150210   11323 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:48:49.150271   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:49.199087   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:49.199182   11323 retry.go:31] will retry after 370.597638ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:49.571361   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:49.622981   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:49.623076   11323 retry.go:31] will retry after 451.705809ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:50.077124   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:50.128795   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	I0319 13:48:50.128889   11323 retry.go:31] will retry after 582.044669ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:50.711457   11323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000
	W0319 13:48:50.762017   11323 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000 returned with exit code 1
	W0319 13:48:50.762121   11323 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	
	W0319 13:48:50.762146   11323 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-947000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-947000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000
	I0319 13:48:50.762155   11323 fix.go:56] duration metric: took 6m20.776689694s for fixHost
	I0319 13:48:50.762164   11323 start.go:83] releasing machines lock for "offline-docker-947000", held for 6m20.776749849s
	W0319 13:48:50.762237   11323 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-947000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-947000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0319 13:48:50.803915   11323 out.go:177] 
	W0319 13:48:50.826299   11323 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0319 13:48:50.826358   11323 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0319 13:48:50.826382   11323 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0319 13:48:50.848193   11323 out.go:177] 

** /stderr **
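
For reference, the subnet scan in the stderr log above steps the third octet by 9 (192.168.49.0/24, .58, .67, .76, .85, ...) and takes the first private /24 that no existing Docker network reserves; here that was 192.168.94.0/24. A rough Go sketch of that selection (a simplified illustration, not minikube's actual pkg/network code):

    package main

    import "fmt"

    // freeSubnet returns the first 192.168.x.0/24 block, stepping the third
    // octet by 9 from 49, that is not already reserved -- mirroring the
    // "skipping subnet ... that is reserved" lines in the log above.
    func freeSubnet(reserved map[string]bool) (string, error) {
    	for octet := 49; octet <= 247; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !reserved[cidr] {
    			return cidr, nil
    		}
    	}
    	return "", fmt.Errorf("no free private /24 available")
    }

    func main() {
    	// The five subnets the log shows being skipped as reserved.
    	reserved := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    		"192.168.85.0/24": true,
    	}
    	fmt.Println(freeSubnet(reserved)) // 192.168.94.0/24 <nil>
    }
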
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-947000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-03-19 13:48:50.922505 -0700 PDT m=+6281.358558119
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-947000
helpers_test.go:235: (dbg) docker inspect offline-docker-947000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-947000",
	        "Id": "98be92d4947a9b4f8aea87766becda447d208c23e65388661d4339661cb090f5",
	        "Created": "2024-03-19T20:42:44.652124442Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-947000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-947000 -n offline-docker-947000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-947000 -n offline-docker-947000: exit status 7 (126.376278ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0319 13:48:51.100914   11894 status.go:249] status error: host: state: unknown state "offline-docker-947000": docker container inspect offline-docker-947000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-947000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-947000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-947000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-947000
--- FAIL: TestOffline (751.25s)
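
Note on the failure mode: the postmortem shows the offline-docker-947000 network existed with an empty "Containers": {}, i.e. the container itself was never created, so every docker container inspect for the 22/tcp host port failed with "No such container" until the 360-second createHost timeout fired. A minimal sketch of that inspect-with-backoff loop (hypothetical helper names, not minikube's real cli_runner/retry code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // sshPort asks Docker for the host port published for 22/tcp, using the
    // same --format template that appears throughout the log above.
    func sshPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).CombinedOutput()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w: %s", container, err, out)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    // sshPortWithRetry retries with a growing delay, roughly like the
    // "will retry after ..." intervals logged above, until the deadline.
    func sshPortWithRetry(container string, deadline time.Duration) (string, error) {
    	delay := 200 * time.Millisecond
    	for start := time.Now(); time.Since(start) < deadline; {
    		if port, err := sshPort(container); err == nil {
    			return port, nil
    		}
    		time.Sleep(delay)
    		delay += delay / 2
    	}
    	return "", fmt.Errorf("timed out getting port 22 for %q", container)
    }

    func main() {
    	fmt.Println(sshPortWithRetry("offline-docker-947000", 3*time.Second))
    }
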

TestCertOptions (7200.733s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-079000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (2m45s)
	TestCertOptions (1m41s)
	TestNetworkPlugins (27m49s)
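
(This banner comes from Go's test runner itself: when the test binary's -timeout, 2h0m0s here, elapses, testing.(*M).startAlarm panics and dumps every goroutine, which is all of the output that follows. A minimal reproduction, assuming a hypothetical hang_test.go run with `go test -timeout 1s`:)

    package main

    import (
    	"testing"
    	"time"
    )

    // Running `go test -timeout 1s` on this file prints
    // "panic: test timed out after 1s" followed by a goroutine dump
    // in the same format as the trace below.
    func TestHang(t *testing.T) {
    	time.Sleep(10 * time.Second) // outlives the -timeout deadline
    }
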

goroutine 2478 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 15 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0009a2d00, 0xc000a69bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc00083e7e0, {0x7242240, 0x2a, 0x2a}, {0x2efbbc5?, 0x498a0c4?, 0x72645c0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00066c460)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00066c460)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000664b80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2481 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4eba0b98, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00227d980?, 0xc00089de00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00227d980, {0xc00089de00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002910198, {0xc00089de00?, 0xc000704a80?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028c85d0, {0x5f3bed8, 0xc0022a0198})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x5f3c018, 0xc0028c85d0}, {0x5f3bed8, 0xc0022a0198}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0022b5140?, {0x5f3c018, 0xc0028c85d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x7204100?, {0x5f3c018?, 0xc0028c85d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x5f3c018, 0xc0028c85d0}, {0x5f3bf98, 0xc002910198}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000201500?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 585
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 1290 [select, 107 minutes]:
net/http.(*persistConn).readLoop(0xc0028e7d40)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1308
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 1812 [syscall, 93 minutes]:
syscall.syscall(0x0?, 0xc0021ec618?, 0x2ee4045?, 0xc0020026b0?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc0020026f0?, 0xc000584e00?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1776
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1

goroutine 15 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 14
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 199 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 198
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 916 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000976740, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 823
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 194 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a7ae00, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 181
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 198 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x5f5fe20, 0xc0000663c0}, 0xc000111750, 0xc000870f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x5f5fe20, 0xc0000663c0}, 0x0?, 0xc000111750, 0xc000111798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x5f5fe20?, 0xc0000663c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0001117d0?, 0x3436d85?, 0xc00227ca80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 194
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 197 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc000a7ad90, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x5a4d240?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00227c960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a7ae00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0000680d0, {0x5f3d4c0, 0xc000a7c4b0}, 0x1, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000680d0, 0x3b9aca00, 0x0, 0x1, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 194
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1046 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc00278c840, 0xc0026da960)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1045
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 193 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00227ca80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 181
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1266 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002421600, 0xc0022f2240)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1265
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2171 [chan receive, 29 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00290a000, 0xc000a026d8)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2102
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 585 [syscall, 2 minutes]:
syscall.syscall6(0xc0028c9f80?, 0x1000000000010?, 0x10000000019?, 0x4eb14448?, 0x90?, 0x7b41108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc002439a40?, 0x2e3c165?, 0x90?, 0x5ea1120?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x2f6cf05?, 0xc002439a74, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002acc1e0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000152580)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000152580)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0006d8ea0, 0xc000152580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc0006d8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc0006d8ea0, 0x5f312f0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 584 [syscall]:
syscall.syscall6(0xc002a09f80?, 0x1000000000010?, 0x10000000019?, 0x4eb77e18?, 0x90?, 0x7b41108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0021bb8a0?, 0x2e3c165?, 0x90?, 0x5ea1120?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x2f6cf05?, 0xc0021bb8d4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc00016b770)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0022c8420)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0022c8420)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0006d8680, 0xc0022c8420)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc0006d8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc0006d8680, 0x5f312f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
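
(goroutines 585 and 584 above are the two stuck tests themselves: each is blocked in os/exec.(*Cmd).Wait on a minikube start child process that never exits, which is what eventually trips the 2h alarm. A defensive sketch using a per-command deadline, as an illustration rather than what helpers_test.go actually does:)

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // runWithDeadline kills the child and returns once the context expires,
    // so a hung subprocess cannot pin the whole test binary on Cmd.Wait.
    func runWithDeadline(d time.Duration, name string, args ...string) error {
    	ctx, cancel := context.WithTimeout(context.Background(), d)
    	defer cancel()
    	out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
    	if ctx.Err() == context.DeadlineExceeded {
    		return fmt.Errorf("%s timed out after %v", name, d)
    	}
    	if err != nil {
    		return fmt.Errorf("%s: %w: %s", name, err, out)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(runWithDeadline(2*time.Second, "sleep", "10"))
    }
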

goroutine 2103 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000dbd40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000dbd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0000dbd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0000dbd40, 0x5f313e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
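
(goroutine 2103 and the similar "waitParallel" goroutines below are tests that called t.Parallel() and are queued behind the run's parallelism limit; that is why they show "29 minutes" with no progress while earlier tests still hold the slots. A small illustration, assuming a hypothetical parallel_test.go run with `go test -parallel 1`:)

    package main

    import (
    	"testing"
    	"time"
    )

    // With -parallel 1, the second subtest blocks inside
    // testing.(*testContext).waitParallel until the first finishes --
    // the same state the queued tests in this dump are in.
    func TestQueued(t *testing.T) {
    	for _, name := range []string{"first", "second"} {
    		name := name // capture for the parallel closure
    		t.Run(name, func(t *testing.T) {
    			t.Parallel()
    			time.Sleep(100 * time.Millisecond)
    		})
    	}
    }
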

goroutine 1291 [select, 107 minutes]:
net/http.(*persistConn).writeLoop(0xc0028e7d40)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1308
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 2196 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00290b380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00290b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00290b380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00290b380, 0xc00238c580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2171
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2104 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00225a000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00225a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc00225a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc00225a000, 0x5f313f0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2181 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00225a820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00225a820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc00225a820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc00225a820, 0x5f31428)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2180 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00225a1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00225a1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc00225a1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc00225a1a0, 0x5f31400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2172 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00290a4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00290a4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00290a4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00290a4e0, 0xc00238c180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2171
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2195 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00290b1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00290b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00290b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00290b1e0, 0xc00238c500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2171
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1250 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc00233d4a0, 0xc0023478c0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 794
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2102 [chan receive, 29 minutes]:
testing.(*T).Run(0xc0000da820, {0x4931d45?, 0x44802521f45?}, 0xc000a026d8)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0000da820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0000da820, 0x5f313d8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2464 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4eba0d88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00227d8c0?, 0xc002288a96?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00227d8c0, {0xc002288a96, 0x56a, 0x56a})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002910180, {0xc002288a96?, 0xc000501dc0?, 0x22c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028c85a0, {0x5f3bed8, 0xc0022a0180})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x5f3c018, 0xc0028c85a0}, {0x5f3bed8, 0xc0022a0180}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc002941e78?, {0x5f3c018, 0xc0028c85a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x7204100?, {0x5f3c018?, 0xc0028c85a0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x5f3c018, 0xc0028c85a0}, {0x5f3bf98, 0xc002910180}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0026da1e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 585
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 2193 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00290ad00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00290ad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00290ad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00290ad00, 0xc00238c400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2171
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 680 [IO wait, 111 minutes]:
internal/poll.runtime_pollWait(0x4eba07b8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000664300?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000664300)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000664300)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00083d3a0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00083d3a0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007e20f0, {0x5f53780, 0xc00083d3a0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0007e20f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc002077ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 677
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2482 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000152580, 0xc0022f2780)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 585
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 907 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000976590, 0x2b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x5a4d240?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002132e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000976740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a69580, {0x5f3d4c0, 0xc0020f6f00}, 0x1, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a69580, 0x3b9aca00, 0x0, 0x1, 0xc0000663c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 916
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2176 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00290ab60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00290ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00290ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00290ab60, 0xc00238c380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2171
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2175 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00290a9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00290a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00290a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00290a9c0, 0xc00238c300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2171
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2182 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00225a9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00225a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00225a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc00225a9c0, 0x5f313a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2183 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00225ab60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00225ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc00225ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc00225ab60, 0x5f313b8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2173 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00290a680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00290a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00290a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00290a680, 0xc00238c200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2171
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2194 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00290aea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00290aea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00290aea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00290aea0, 0xc00238c480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2171
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2166 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00225a340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00225a340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc00225a340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc00225a340, 0x5f31420)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 915 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002132f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 823
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 909 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 908
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 908 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x5f5fe20, 0xc0000663c0}, 0xc00293ef50, 0xc0020c2f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x5f5fe20, 0xc0000663c0}, 0x20?, 0xc00293ef50, 0xc00293ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x5f5fe20?, 0xc0000663c0?}, 0xc0000da1a0?, 0x2f6fbc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00293efd0?, 0x2fb5ec4?, 0xc0029b8420?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 916
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a
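
Goroutines 908/909 are the polling pair behind client-go's certificate rotation. A minimal standalone use of the same apimachinery wait helper is sketched below; the interval and condition are illustrative, and note the helper is deprecated in newer apimachinery releases in favor of PollUntilContextCancel:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	attempts := 0
	// Runs the condition immediately, then every 200ms, until it returns true,
	// returns an error, or ctx is done.
	err := wait.PollImmediateUntilWithContext(ctx, 200*time.Millisecond,
		func(ctx context.Context) (bool, error) {
			attempts++
			return attempts >= 3, nil // succeed on the third attempt (illustrative)
		})
	fmt.Println("attempts:", attempts, "err:", err)
}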

goroutine 2174 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000163950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00290a820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00290a820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00290a820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00290a820, 0xc00238c280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2171
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1194 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc0021d9ce0, 0xc002346000)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1193
	/usr/local/go/src/os/exec/exec.go:750 +0x973
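
Goroutine 1194 has been blocked in watchCtx for 107 minutes; that goroutine is spawned by Start for commands created with a context. A minimal sketch of the pattern, where the sleep command and timeout are illustrative:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "10") // Start spawns the watchCtx goroutine
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	// Wait reaps the process (killed when ctx expires) and lets watchCtx exit;
	// never calling Wait is one plausible way such goroutines end up parked in dumps.
	fmt.Println("wait:", cmd.Wait())
}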

goroutine 2493 [select]:
os/exec.(*Cmd).watchCtx(0xc0022c8420, 0xc0026da420)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 584
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2491 [IO wait]:
internal/poll.runtime_pollWait(0x4eba05c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002704960?, 0xc00015028d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002704960, {0xc00015028d, 0x573, 0x573})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0022a0210, {0xc00015028d?, 0xc002241180?, 0x223?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002a08720, {0x5f3bed8, 0xc002910178})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x5f3c018, 0xc002a08720}, {0x5f3bed8, 0xc002910178}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc002006e78?, {0x5f3c018, 0xc002a08720})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x7204100?, {0x5f3c018?, 0xc002a08720?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x5f3c018, 0xc002a08720}, {0x5f3bf98, 0xc0022a0210}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0026da360?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 584
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 2492 [IO wait]:
internal/poll.runtime_pollWait(0x4eba09a8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002704a20?, 0xc002068400?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002704a20, {0xc002068400, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0022a0240, {0xc002068400?, 0x9?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002a08750, {0x5f3bed8, 0xc002910188})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x5f3c018, 0xc002a08750}, {0x5f3bed8, 0xc002910188}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x5f3c018, 0xc002a08750})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x7204100?, {0x5f3c018?, 0xc002a08750?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x5f3c018, 0xc002a08750}, {0x5f3bf98, 0xc0022a0240}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0001f8900?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 584
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab
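
The many goroutines parked in waitParallel for 29 minutes are subtests that called t.Parallel() and are queued behind the -test.parallel limit while earlier tests hold the slots. A minimal sketch of the conditional-parallel shape those frames imply; this is not minikube's MaybeParallel, and the gate and subtest names are illustrative:

package integration_sketch

import "testing"

// maybeParallel opts a subtest into parallel execution unless it must run
// serially; the t.Parallel() call is where the dumps above show tests parked.
func maybeParallel(t *testing.T, serial bool) {
	if serial {
		return
	}
	t.Parallel()
}

func TestParallelGateSketch(t *testing.T) {
	for _, name := range []string{"auto", "kindnet", "calico"} {
		name := name // capture loop variable for the closure (pre-Go 1.22)
		t.Run(name, func(t *testing.T) {
			maybeParallel(t, false)
			t.Log("subtest:", name)
		})
	}
}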

TestDockerFlags (753.25s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-046000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0319 13:49:59.698244    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:50:25.961083    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 13:54:42.788746    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:54:59.729259    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:55:25.993581    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 13:59:59.728008    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 14:00:09.040707    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 14:00:25.992115    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-046000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m31.928536801s)

-- stdout --
	* [docker-flags-046000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-046000" primary control-plane node in "docker-flags-046000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-046000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0319 13:49:55.027828   12046 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:49:55.028534   12046 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:49:55.028543   12046 out.go:304] Setting ErrFile to fd 2...
	I0319 13:49:55.028549   12046 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:49:55.028917   12046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:49:55.030690   12046 out.go:298] Setting JSON to false
	I0319 13:49:55.053191   12046 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6566,"bootTime":1710874829,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 13:49:55.053287   12046 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 13:49:55.075228   12046 out.go:177] * [docker-flags-046000] minikube v1.32.0 on Darwin 14.3.1
	I0319 13:49:55.118791   12046 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 13:49:55.118849   12046 notify.go:220] Checking for updates...
	I0319 13:49:55.162784   12046 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 13:49:55.183942   12046 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 13:49:55.205091   12046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 13:49:55.226983   12046 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	I0319 13:49:55.248729   12046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 13:49:55.270677   12046 config.go:182] Loaded profile config "force-systemd-flag-509000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:49:55.270853   12046 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 13:49:55.326661   12046 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 13:49:55.326844   12046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:49:55.427153   12046 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:211 SystemTime:2024-03-19 20:49:55.417082433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress
:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined nam
e=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker
Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 13:49:55.469642   12046 out.go:177] * Using the docker driver based on user configuration
	I0319 13:49:55.490962   12046 start.go:297] selected driver: docker
	I0319 13:49:55.490988   12046 start.go:901] validating driver "docker" against <nil>
	I0319 13:49:55.491003   12046 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 13:49:55.495523   12046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:49:55.593484   12046 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:211 SystemTime:2024-03-19 20:49:55.584028568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress
:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined nam
e=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker
Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 13:49:55.593654   12046 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 13:49:55.593846   12046 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0319 13:49:55.615333   12046 out.go:177] * Using Docker Desktop driver with root privileges
	I0319 13:49:55.636873   12046 cni.go:84] Creating CNI manager for ""
	I0319 13:49:55.636920   12046 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0319 13:49:55.636934   12046 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 13:49:55.637028   12046 start.go:340] cluster config:
	{Name:docker-flags-046000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-046000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPat
h: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 13:49:55.659076   12046 out.go:177] * Starting "docker-flags-046000" primary control-plane node in "docker-flags-046000" cluster
	I0319 13:49:55.702646   12046 cache.go:121] Beginning downloading kic base image for docker with docker
	I0319 13:49:55.724968   12046 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0319 13:49:55.747021   12046 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:49:55.747068   12046 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0319 13:49:55.747124   12046 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0319 13:49:55.747142   12046 cache.go:56] Caching tarball of preloaded images
	I0319 13:49:55.747380   12046 preload.go:173] Found /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0319 13:49:55.747399   12046 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0319 13:49:55.748315   12046 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/docker-flags-046000/config.json ...
	I0319 13:49:55.748573   12046 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/docker-flags-046000/config.json: {Name:mke3ea4ec44cbfa2a46e8d5013cd93f6ecd8c6de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 13:49:55.798734   12046 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0319 13:49:55.798749   12046 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0319 13:49:55.798766   12046 cache.go:194] Successfully downloaded all kic artifacts
	I0319 13:49:55.798802   12046 start.go:360] acquireMachinesLock for docker-flags-046000: {Name:mkfd0b6d7a69100a21c369f0e2ec4fed3eedb3b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 13:49:55.798945   12046 start.go:364] duration metric: took 131.981µs to acquireMachinesLock for "docker-flags-046000"
	I0319 13:49:55.798969   12046 start.go:93] Provisioning new machine with config: &{Name:docker-flags-046000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-046000 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0319 13:49:55.799058   12046 start.go:125] createHost starting for "" (driver="docker")
	I0319 13:49:55.841935   12046 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0319 13:49:55.842314   12046 start.go:159] libmachine.API.Create for "docker-flags-046000" (driver="docker")
	I0319 13:49:55.842366   12046 client.go:168] LocalClient.Create starting
	I0319 13:49:55.842532   12046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 13:49:55.842623   12046 main.go:141] libmachine: Decoding PEM data...
	I0319 13:49:55.842652   12046 main.go:141] libmachine: Parsing certificate...
	I0319 13:49:55.842749   12046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 13:49:55.842820   12046 main.go:141] libmachine: Decoding PEM data...
	I0319 13:49:55.842837   12046 main.go:141] libmachine: Parsing certificate...
	I0319 13:49:55.843898   12046 cli_runner.go:164] Run: docker network inspect docker-flags-046000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 13:49:55.894366   12046 cli_runner.go:211] docker network inspect docker-flags-046000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 13:49:55.894475   12046 network_create.go:281] running [docker network inspect docker-flags-046000] to gather additional debugging logs...
	I0319 13:49:55.894495   12046 cli_runner.go:164] Run: docker network inspect docker-flags-046000
	W0319 13:49:55.944452   12046 cli_runner.go:211] docker network inspect docker-flags-046000 returned with exit code 1
	I0319 13:49:55.944479   12046 network_create.go:284] error running [docker network inspect docker-flags-046000]: docker network inspect docker-flags-046000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-046000 not found
	I0319 13:49:55.944499   12046 network_create.go:286] output of [docker network inspect docker-flags-046000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-046000 not found
	
	** /stderr **
	I0319 13:49:55.944638   12046 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:49:55.996457   12046 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:49:55.998107   12046 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:49:55.999714   12046 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:49:56.001417   12046 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:49:56.001795   12046 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002402e70}
	I0319 13:49:56.001836   12046 network_create.go:124] attempt to create docker network docker-flags-046000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0319 13:49:56.001947   12046 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-046000 docker-flags-046000
	I0319 13:49:56.087291   12046 network_create.go:108] docker network docker-flags-046000 192.168.85.0/24 created
	I0319 13:49:56.087330   12046 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-046000" container
	I0319 13:49:56.087429   12046 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 13:49:56.138314   12046 cli_runner.go:164] Run: docker volume create docker-flags-046000 --label name.minikube.sigs.k8s.io=docker-flags-046000 --label created_by.minikube.sigs.k8s.io=true
	I0319 13:49:56.188492   12046 oci.go:103] Successfully created a docker volume docker-flags-046000
	I0319 13:49:56.188626   12046 cli_runner.go:164] Run: docker run --rm --name docker-flags-046000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-046000 --entrypoint /usr/bin/test -v docker-flags-046000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 13:49:56.580908   12046 oci.go:107] Successfully prepared a docker volume docker-flags-046000
	I0319 13:49:56.580946   12046 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:49:56.580960   12046 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 13:49:56.581075   12046 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-046000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0319 13:55:55.876239   12046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:55:55.876382   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 13:55:55.929364   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 13:55:55.929491   12046 retry.go:31] will retry after 144.187414ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:55:56.076140   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 13:55:56.129424   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 13:55:56.129528   12046 retry.go:31] will retry after 368.600888ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:55:56.500615   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 13:55:56.553167   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 13:55:56.553276   12046 retry.go:31] will retry after 808.067631ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:55:57.363228   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 13:55:57.416303   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	W0319 13:55:57.416410   12046 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	
	W0319 13:55:57.416434   12046 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:55:57.416500   12046 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:55:57.416560   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 13:55:57.465854   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 13:55:57.465944   12046 retry.go:31] will retry after 159.673104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:55:57.626050   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 13:55:57.676783   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 13:55:57.676881   12046 retry.go:31] will retry after 476.543823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:55:58.154152   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 13:55:58.207839   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 13:55:58.207934   12046 retry.go:31] will retry after 677.704773ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:55:58.886883   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 13:55:58.940242   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	W0319 13:55:58.940347   12046 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	
	W0319 13:55:58.940369   12046 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:55:58.940385   12046 start.go:128] duration metric: took 6m3.109112572s to createHost
	I0319 13:55:58.940393   12046 start.go:83] releasing machines lock for "docker-flags-046000", held for 6m3.109241576s
	W0319 13:55:58.940409   12046 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0319 13:55:58.940854   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:55:58.989760   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	I0319 13:55:58.989826   12046 delete.go:82] Unable to get host status for docker-flags-046000, assuming it has already been deleted: state: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	W0319 13:55:58.989916   12046 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0319 13:55:58.989943   12046 start.go:728] Will try again in 5 seconds ...
	I0319 13:56:03.991319   12046 start.go:360] acquireMachinesLock for docker-flags-046000: {Name:mkfd0b6d7a69100a21c369f0e2ec4fed3eedb3b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 13:56:03.991627   12046 start.go:364] duration metric: took 138.595µs to acquireMachinesLock for "docker-flags-046000"
	I0319 13:56:03.991658   12046 start.go:96] Skipping create...Using existing machine configuration
	I0319 13:56:03.991669   12046 fix.go:54] fixHost starting: 
	I0319 13:56:03.992037   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:04.044452   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	I0319 13:56:04.044515   12046 fix.go:112] recreateIfNeeded on docker-flags-046000: state= err=unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:04.044532   12046 fix.go:117] machineExists: false. err=machine does not exist
	I0319 13:56:04.066250   12046 out.go:177] * docker "docker-flags-046000" container is missing, will recreate.
	I0319 13:56:04.108980   12046 delete.go:124] DEMOLISHING docker-flags-046000 ...
	I0319 13:56:04.109163   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:04.160004   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	W0319 13:56:04.160066   12046 stop.go:83] unable to get state: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:04.160081   12046 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:04.160452   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:04.209940   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	I0319 13:56:04.210000   12046 delete.go:82] Unable to get host status for docker-flags-046000, assuming it has already been deleted: state: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:04.210093   12046 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-046000
	W0319 13:56:04.259534   12046 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-046000 returned with exit code 1
	I0319 13:56:04.259581   12046 kic.go:371] could not find the container docker-flags-046000 to remove it. will try anyways
	I0319 13:56:04.259661   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:04.308598   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	W0319 13:56:04.308648   12046 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:04.308729   12046 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-046000 /bin/bash -c "sudo init 0"
	W0319 13:56:04.358186   12046 cli_runner.go:211] docker exec --privileged -t docker-flags-046000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0319 13:56:04.358223   12046 oci.go:650] error shutdown docker-flags-046000: docker exec --privileged -t docker-flags-046000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:05.359046   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:05.412353   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	I0319 13:56:05.412419   12046 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:05.412427   12046 oci.go:664] temporary error: container docker-flags-046000 status is  but expect it to be exited
	I0319 13:56:05.412450   12046 retry.go:31] will retry after 606.28872ms: couldn't verify container is exited. %v: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:06.019356   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:06.069218   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	I0319 13:56:06.069265   12046 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:06.069279   12046 oci.go:664] temporary error: container docker-flags-046000 status is  but expect it to be exited
	I0319 13:56:06.069306   12046 retry.go:31] will retry after 622.212709ms: couldn't verify container is exited. %v: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:06.692818   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:06.746619   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	I0319 13:56:06.746668   12046 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:06.746678   12046 oci.go:664] temporary error: container docker-flags-046000 status is  but expect it to be exited
	I0319 13:56:06.746701   12046 retry.go:31] will retry after 895.240137ms: couldn't verify container is exited. %v: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:07.642645   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:07.693467   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	I0319 13:56:07.693522   12046 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:07.693535   12046 oci.go:664] temporary error: container docker-flags-046000 status is  but expect it to be exited
	I0319 13:56:07.693561   12046 retry.go:31] will retry after 1.137688362s: couldn't verify container is exited. %v: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:08.832507   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:08.883879   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	I0319 13:56:08.883929   12046 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:08.883938   12046 oci.go:664] temporary error: container docker-flags-046000 status is  but expect it to be exited
	I0319 13:56:08.883967   12046 retry.go:31] will retry after 2.655464408s: couldn't verify container is exited. %v: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:11.541114   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:11.594540   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	I0319 13:56:11.594589   12046 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:11.594599   12046 oci.go:664] temporary error: container docker-flags-046000 status is  but expect it to be exited
	I0319 13:56:11.594626   12046 retry.go:31] will retry after 4.148382334s: couldn't verify container is exited. %v: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:15.743779   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:15.795885   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	I0319 13:56:15.795936   12046 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:15.795947   12046 oci.go:664] temporary error: container docker-flags-046000 status is  but expect it to be exited
	I0319 13:56:15.795968   12046 retry.go:31] will retry after 3.651579423s: couldn't verify container is exited. %v: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:19.447813   12046 cli_runner.go:164] Run: docker container inspect docker-flags-046000 --format={{.State.Status}}
	W0319 13:56:19.500456   12046 cli_runner.go:211] docker container inspect docker-flags-046000 --format={{.State.Status}} returned with exit code 1
	I0319 13:56:19.500505   12046 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 13:56:19.500514   12046 oci.go:664] temporary error: container docker-flags-046000 status is  but expect it to be exited
	I0319 13:56:19.500546   12046 oci.go:88] couldn't shut down docker-flags-046000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	 
	I0319 13:56:19.500626   12046 cli_runner.go:164] Run: docker rm -f -v docker-flags-046000
	I0319 13:56:19.551102   12046 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-046000
	W0319 13:56:19.599921   12046 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-046000 returned with exit code 1
	I0319 13:56:19.600029   12046 cli_runner.go:164] Run: docker network inspect docker-flags-046000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:56:19.649576   12046 cli_runner.go:164] Run: docker network rm docker-flags-046000
	I0319 13:56:19.753060   12046 fix.go:124] Sleeping 1 second for extra luck!
	I0319 13:56:20.754378   12046 start.go:125] createHost starting for "" (driver="docker")
	I0319 13:56:20.776194   12046 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0319 13:56:20.776360   12046 start.go:159] libmachine.API.Create for "docker-flags-046000" (driver="docker")
	I0319 13:56:20.776384   12046 client.go:168] LocalClient.Create starting
	I0319 13:56:20.776592   12046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 13:56:20.776681   12046 main.go:141] libmachine: Decoding PEM data...
	I0319 13:56:20.776706   12046 main.go:141] libmachine: Parsing certificate...
	I0319 13:56:20.776793   12046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 13:56:20.776865   12046 main.go:141] libmachine: Decoding PEM data...
	I0319 13:56:20.776881   12046 main.go:141] libmachine: Parsing certificate...
	I0319 13:56:20.777691   12046 cli_runner.go:164] Run: docker network inspect docker-flags-046000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 13:56:20.830887   12046 cli_runner.go:211] docker network inspect docker-flags-046000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 13:56:20.830987   12046 network_create.go:281] running [docker network inspect docker-flags-046000] to gather additional debugging logs...
	I0319 13:56:20.831008   12046 cli_runner.go:164] Run: docker network inspect docker-flags-046000
	W0319 13:56:20.880395   12046 cli_runner.go:211] docker network inspect docker-flags-046000 returned with exit code 1
	I0319 13:56:20.880433   12046 network_create.go:284] error running [docker network inspect docker-flags-046000]: docker network inspect docker-flags-046000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-046000 not found
	I0319 13:56:20.880446   12046 network_create.go:286] output of [docker network inspect docker-flags-046000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-046000 not found
	
	** /stderr **
	I0319 13:56:20.880595   12046 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:56:20.931489   12046 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:56:20.932832   12046 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:56:20.934403   12046 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:56:20.935773   12046 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:56:20.937391   12046 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:56:20.938961   12046 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:56:20.939344   12046 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002403940}
	I0319 13:56:20.939357   12046 network_create.go:124] attempt to create docker network docker-flags-046000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0319 13:56:20.939420   12046 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-046000 docker-flags-046000
	I0319 13:56:21.025682   12046 network_create.go:108] docker network docker-flags-046000 192.168.103.0/24 created
	I0319 13:56:21.025716   12046 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-046000" container
	I0319 13:56:21.025824   12046 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 13:56:21.077585   12046 cli_runner.go:164] Run: docker volume create docker-flags-046000 --label name.minikube.sigs.k8s.io=docker-flags-046000 --label created_by.minikube.sigs.k8s.io=true
	I0319 13:56:21.126375   12046 oci.go:103] Successfully created a docker volume docker-flags-046000
	I0319 13:56:21.126520   12046 cli_runner.go:164] Run: docker run --rm --name docker-flags-046000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-046000 --entrypoint /usr/bin/test -v docker-flags-046000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 13:56:21.438192   12046 oci.go:107] Successfully prepared a docker volume docker-flags-046000
	I0319 13:56:21.438222   12046 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:56:21.438236   12046 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 13:56:21.438354   12046 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-046000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0319 14:02:20.775172   12046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 14:02:20.775295   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:20.828191   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:20.828301   12046 retry.go:31] will retry after 184.069853ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:21.013048   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:21.064544   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:21.064661   12046 retry.go:31] will retry after 436.429168ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:21.502833   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:21.554679   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:21.554773   12046 retry.go:31] will retry after 471.290807ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:22.027959   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:22.080938   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	W0319 14:02:22.081040   12046 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	
	W0319 14:02:22.081065   12046 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:22.081121   12046 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 14:02:22.081179   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:22.131694   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:22.131808   12046 retry.go:31] will retry after 172.65012ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:22.305209   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:22.357876   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:22.357984   12046 retry.go:31] will retry after 364.316091ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:22.723902   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:22.775679   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:22.775780   12046 retry.go:31] will retry after 652.0926ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:23.428253   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:23.480329   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	W0319 14:02:23.480437   12046 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	
	W0319 14:02:23.480455   12046 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:23.480467   12046 start.go:128] duration metric: took 6m2.727804336s to createHost
	I0319 14:02:23.480535   12046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 14:02:23.480593   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:23.531280   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:23.531371   12046 retry.go:31] will retry after 249.474561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:23.781611   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:23.832661   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:23.832753   12046 retry.go:31] will retry after 222.010984ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:24.055367   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:24.106272   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:24.106363   12046 retry.go:31] will retry after 329.795264ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:24.436862   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:24.490224   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:24.490307   12046 retry.go:31] will retry after 434.041991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:24.926758   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:24.979799   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	W0319 14:02:24.979894   12046 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	
	W0319 14:02:24.979916   12046 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:24.979976   12046 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 14:02:24.980044   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:25.030380   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:25.030471   12046 retry.go:31] will retry after 305.776967ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:25.338061   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:25.391596   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:25.391691   12046 retry.go:31] will retry after 545.965504ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:25.938121   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:25.990688   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	I0319 14:02:25.990792   12046 retry.go:31] will retry after 729.769288ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:26.722397   12046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000
	W0319 14:02:26.773599   12046 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000 returned with exit code 1
	W0319 14:02:26.773692   12046 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	
	W0319 14:02:26.773715   12046 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-046000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-046000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	I0319 14:02:26.773725   12046 fix.go:56] duration metric: took 6m22.783900994s for fixHost
	I0319 14:02:26.773732   12046 start.go:83] releasing machines lock for "docker-flags-046000", held for 6m22.783937784s
	W0319 14:02:26.773806   12046 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-046000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-046000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0319 14:02:26.817321   12046 out.go:177] 
	W0319 14:02:26.839322   12046 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0319 14:02:26.839351   12046 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0319 14:02:26.839369   12046 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0319 14:02:26.882158   12046 out.go:177] 

** /stderr **
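
A note on the subnet scan visible in the stderr above: when the named docker network is missing, minikube walks candidate private /24 subnets (192.168.49.0/24, then 58, 67, 76, 85, 94, and finally 103 in this run, i.e. the third octet advancing by 9) until one is unreserved, then issues docker network create. Below is a compact sketch of just that stepping behavior, with hypothetical names and only what this log shows; minikube's real network.go also probes host interfaces and tracks reservations.

    // subnet_scan.go: illustrative sketch of the candidate-subnet stepping
    // seen in the log above; hypothetical, not minikube's actual code.
    package main

    import "fmt"

    func main() {
        // Third octets the log reports as already reserved in this run.
        reserved := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
        for octet := 49; octet <= 254; octet += 9 { // candidates advance by 9
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if reserved[octet] {
                fmt.Println("skipping reserved subnet", subnet)
                continue
            }
            fmt.Println("using free private subnet", subnet) // 192.168.103.0/24 here
            break
        }
    }
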
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-046000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-046000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-046000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (202.975603ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-046000 host status: state: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-046000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-046000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-046000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (200.646878ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-046000 host status: state: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000
	

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-046000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-046000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-19 14:02:27.352503 -0700 PDT m=+7097.758423879
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-046000
helpers_test.go:235: (dbg) docker inspect docker-flags-046000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-046000",
	        "Id": "71e918c47e08e06c0a67032eabb713e18976968e4eac20db897e684c7afa27f2",
	        "Created": "2024-03-19T20:56:20.986333434Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-046000"
	        }
	    }
	]

-- /stdout --
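
The inspect output above confirms the failure mode: the minikube-labeled network survives while the container itself was never created (its Containers map is empty). A small sketch (hypothetical file name; standard docker CLI flags only, not part of the test harness) that lists leftover networks by the created_by.minikube.sigs.k8s.io label shown above:

    // leftover_networks.go: hypothetical helper that lists docker networks
    // carrying the label minikube applies at creation time (see the
    // "Labels" block in the inspect output above).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("docker", "network", "ls",
            "--filter", "label=created_by.minikube.sigs.k8s.io=true",
            "--format", "{{.Name}}").Output()
        if err != nil {
            fmt.Println("docker network ls failed:", err)
            return
        }
        fmt.Print(string(out)) // would print docker-flags-046000 in this run
    }
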
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-046000 -n docker-flags-046000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-046000 -n docker-flags-046000: exit status 7 (112.587418ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0319 14:02:27.516944   12556 status.go:249] status error: host: state: unknown state "docker-flags-046000": docker container inspect docker-flags-046000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-046000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-046000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-046000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-046000
--- FAIL: TestDockerFlags (753.25s)
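
For context on the repeated inspect failures above: minikube's shutdown verification polls docker container inspect --format={{.State.Status}} and sleeps a growing, jittered interval between attempts (1.1s, 2.7s, 4.1s, ... in this log) until the container reports exited or retries are exhausted; here every poll failed with "No such container", so the host was force-removed and recreated. A minimal standalone sketch of that poll-with-backoff pattern (simplified, with hypothetical names; not minikube's actual oci.go/retry.go code):

    // poll_exited.go: simplified sketch of the verify-shutdown loop in the
    // log above; deterministic backoff instead of minikube's jittered one.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // containerState shells out exactly as the log does; a missing container
    // makes docker exit 1 with "No such container", surfaced as an error.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("unknown state %q: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func waitForExited(name string, attempts int) error {
        backoff := time.Second
        for i := 0; i < attempts; i++ {
            if state, err := containerState(name); err == nil && state == "exited" {
                return nil
            }
            time.Sleep(backoff)
            backoff += backoff / 2 // grow the delay between retries
        }
        return fmt.Errorf("couldn't verify container %q is exited", name)
    }

    func main() {
        if err := waitForExited("docker-flags-046000", 5); err != nil {
            fmt.Println(err) // mirrors the "couldn't shut down" warning above
        }
    }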

TestForceSystemdFlag (752.95s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-509000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-509000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m31.830907908s)

-- stdout --
	* [force-systemd-flag-509000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-509000" primary control-plane node in "force-systemd-flag-509000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-509000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0319 13:48:51.902237   11918 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:48:51.902503   11918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:48:51.902508   11918 out.go:304] Setting ErrFile to fd 2...
	I0319 13:48:51.902512   11918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:48:51.903161   11918 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:48:51.905031   11918 out.go:298] Setting JSON to false
	I0319 13:48:51.927615   11918 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6502,"bootTime":1710874829,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 13:48:51.927715   11918 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 13:48:51.948876   11918 out.go:177] * [force-systemd-flag-509000] minikube v1.32.0 on Darwin 14.3.1
	I0319 13:48:51.992127   11918 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 13:48:51.992198   11918 notify.go:220] Checking for updates...
	I0319 13:48:52.035039   11918 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 13:48:52.078885   11918 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 13:48:52.100139   11918 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 13:48:52.122220   11918 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	I0319 13:48:52.144125   11918 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 13:48:52.167062   11918 config.go:182] Loaded profile config "force-systemd-env-506000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:48:52.167250   11918 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 13:48:52.223976   11918 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 13:48:52.224127   11918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:48:52.321449   11918 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:false NGoroutines:201 SystemTime:2024-03-19 20:48:52.311452512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 13:48:52.343437   11918 out.go:177] * Using the docker driver based on user configuration
	I0319 13:48:52.364946   11918 start.go:297] selected driver: docker
	I0319 13:48:52.364971   11918 start.go:901] validating driver "docker" against <nil>
	I0319 13:48:52.364986   11918 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 13:48:52.368971   11918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:48:52.468899   11918 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:false NGoroutines:201 SystemTime:2024-03-19 20:48:52.459203432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 13:48:52.469115   11918 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 13:48:52.469315   11918 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 13:48:52.491044   11918 out.go:177] * Using Docker Desktop driver with root privileges
	I0319 13:48:52.513154   11918 cni.go:84] Creating CNI manager for ""
	I0319 13:48:52.513201   11918 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0319 13:48:52.513221   11918 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 13:48:52.513317   11918 start.go:340] cluster config:
	{Name:force-systemd-flag-509000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 13:48:52.534792   11918 out.go:177] * Starting "force-systemd-flag-509000" primary control-plane node in "force-systemd-flag-509000" cluster
	I0319 13:48:52.578979   11918 cache.go:121] Beginning downloading kic base image for docker with docker
	I0319 13:48:52.600624   11918 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0319 13:48:52.643859   11918 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:48:52.643919   11918 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0319 13:48:52.643945   11918 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0319 13:48:52.643963   11918 cache.go:56] Caching tarball of preloaded images
	I0319 13:48:52.644184   11918 preload.go:173] Found /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0319 13:48:52.644205   11918 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0319 13:48:52.644347   11918 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/force-systemd-flag-509000/config.json ...
	I0319 13:48:52.644399   11918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/force-systemd-flag-509000/config.json: {Name:mk4bab7041a4d7135176197d7735e89c8d30e055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 13:48:52.694727   11918 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0319 13:48:52.694750   11918 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0319 13:48:52.694767   11918 cache.go:194] Successfully downloaded all kic artifacts
	I0319 13:48:52.694807   11918 start.go:360] acquireMachinesLock for force-systemd-flag-509000: {Name:mk5bbb21f642d77f39af40d4fdfd93723ece0ece Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 13:48:52.694957   11918 start.go:364] duration metric: took 137.855µs to acquireMachinesLock for "force-systemd-flag-509000"
	I0319 13:48:52.694982   11918 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-509000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0319 13:48:52.695029   11918 start.go:125] createHost starting for "" (driver="docker")
	I0319 13:48:52.738028   11918 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0319 13:48:52.738394   11918 start.go:159] libmachine.API.Create for "force-systemd-flag-509000" (driver="docker")
	I0319 13:48:52.738438   11918 client.go:168] LocalClient.Create starting
	I0319 13:48:52.738616   11918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 13:48:52.738709   11918 main.go:141] libmachine: Decoding PEM data...
	I0319 13:48:52.738744   11918 main.go:141] libmachine: Parsing certificate...
	I0319 13:48:52.738834   11918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 13:48:52.738902   11918 main.go:141] libmachine: Decoding PEM data...
	I0319 13:48:52.738918   11918 main.go:141] libmachine: Parsing certificate...
	I0319 13:48:52.739925   11918 cli_runner.go:164] Run: docker network inspect force-systemd-flag-509000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 13:48:52.790101   11918 cli_runner.go:211] docker network inspect force-systemd-flag-509000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 13:48:52.790203   11918 network_create.go:281] running [docker network inspect force-systemd-flag-509000] to gather additional debugging logs...
	I0319 13:48:52.790222   11918 cli_runner.go:164] Run: docker network inspect force-systemd-flag-509000
	W0319 13:48:52.839660   11918 cli_runner.go:211] docker network inspect force-systemd-flag-509000 returned with exit code 1
	I0319 13:48:52.839691   11918 network_create.go:284] error running [docker network inspect force-systemd-flag-509000]: docker network inspect force-systemd-flag-509000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-509000 not found
	I0319 13:48:52.839703   11918 network_create.go:286] output of [docker network inspect force-systemd-flag-509000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-509000 not found
	
	** /stderr **
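
The Run/exit-code pairs above come from shelling out to the docker CLI and classifying the result by exit status; a missing network surfaces as exit status 1 with "network ... not found" on stderr. A minimal sketch of that pattern, assuming only the standard os/exec package (the trimmed --format template is illustrative):

-- sketch (Go) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Inspect a network that may not exist and branch on the exit code,
	// mirroring the cli_runner.go lines above.
	cmd := exec.Command("docker", "network", "inspect",
		"force-systemd-flag-509000", "--format", "{{.Name}}")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// "not found" comes back as a non-zero exit, matching the
		// W-level "returned with exit code 1" lines in the log.
		fmt.Printf("exit %d: %s", ee.ExitCode(), out)
	} else if err != nil {
		fmt.Println("could not run docker:", err)
	} else {
		fmt.Printf("ok: %s", out)
	}
}
-- /sketch --
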
	I0319 13:48:52.839822   11918 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:48:52.891027   11918 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:48:52.892666   11918 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:48:52.893027   11918 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022b4270}
	I0319 13:48:52.893045   11918 network_create.go:124] attempt to create docker network force-systemd-flag-509000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0319 13:48:52.893114   11918 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-509000 force-systemd-flag-509000
	W0319 13:48:52.942632   11918 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-509000 force-systemd-flag-509000 returned with exit code 1
	W0319 13:48:52.942679   11918 network_create.go:149] failed to create docker network force-systemd-flag-509000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-509000 force-systemd-flag-509000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0319 13:48:52.942701   11918 network_create.go:116] failed to create docker network force-systemd-flag-509000 192.168.67.0/24, will retry: subnet is taken
	I0319 13:48:52.944092   11918 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:48:52.944468   11918 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00231c720}
	I0319 13:48:52.944481   11918 network_create.go:124] attempt to create docker network force-systemd-flag-509000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0319 13:48:52.944553   11918 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-509000 force-systemd-flag-509000
	I0319 13:48:53.029276   11918 network_create.go:108] docker network force-systemd-flag-509000 192.168.76.0/24 created
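
The subnet choice above is the visible part of a scan over candidate /24 networks. Judging purely from the logged sequence (49, 58, 67, 76, and later 85 and 94), candidates appear to advance the third octet by 9; a sketch under that inferred assumption, not taken from minikube's source:

-- sketch (Go) --
package main

import "fmt"

func main() {
	// Candidates start at 192.168.49.0/24 and step the third octet by 9
	// until a free range is found. The step size is inferred from the
	// log above, so treat it as an assumption.
	taken := map[int]bool{49: true, 58: true, 67: true} // reserved or overlapping
	for octet := 49; octet <= 255; octet += 9 {
		if taken[octet] {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is reserved\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
		break
	}
}
-- /sketch --

The "calculated static IP" on the next log line is then just the first client address of the chosen range (gateway .1, node .2).
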
	I0319 13:48:53.029313   11918 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-509000" container
	I0319 13:48:53.029435   11918 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 13:48:53.081432   11918 cli_runner.go:164] Run: docker volume create force-systemd-flag-509000 --label name.minikube.sigs.k8s.io=force-systemd-flag-509000 --label created_by.minikube.sigs.k8s.io=true
	I0319 13:48:53.132309   11918 oci.go:103] Successfully created a docker volume force-systemd-flag-509000
	I0319 13:48:53.132439   11918 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-509000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-509000 --entrypoint /usr/bin/test -v force-systemd-flag-509000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 13:48:53.497882   11918 oci.go:107] Successfully prepared a docker volume force-systemd-flag-509000
	I0319 13:48:53.497930   11918 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:48:53.497943   11918 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 13:48:53.498052   11918 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-509000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
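
The preload step above untars an lz4-compressed image cache into the named volume using a throwaway container, so the node starts with its images pre-seeded. A sketch reconstructing that invocation from the logged command (image digest trimmed for brevity):

-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Bind-mount the tarball read-only, mount the cluster volume at
	// /extractDir, and let tar inside the kicbase image do the unpack.
	tarball := "/Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4"
	args := []string{
		"run", "--rm", "--entrypoint", "/usr/bin/tar",
		"-v", tarball + ":/preloaded.tar:ro",
		"-v", "force-systemd-flag-509000:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}
-- /sketch --
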
	I0319 13:54:52.771362   11918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:54:52.771513   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 13:54:52.824849   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 13:54:52.824981   11918 retry.go:31] will retry after 248.014084ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
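
Each failed inspect from here on is followed by a "will retry after ..." line with a growing, jittered delay. A self-contained sketch of that retry shape; the exact backoff curve is illustrative, not minikube's:

-- sketch (Go) --
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs op with a growing, jittered delay until it succeeds or
// the attempt budget is spent, mirroring the retry.go:31 lines above.
func retry(attempts int, op func() error) error {
	base := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(4, func() error {
		calls++
		if calls < 3 {
			return errors.New("No such container: force-systemd-flag-509000")
		}
		return nil
	})
}
-- /sketch --
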
	I0319 13:54:53.075375   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 13:54:53.127664   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 13:54:53.127759   11918 retry.go:31] will retry after 357.24696ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:54:53.486067   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 13:54:53.538670   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 13:54:53.538783   11918 retry.go:31] will retry after 805.008981ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:54:54.344671   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 13:54:54.394904   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	W0319 13:54:54.395016   11918 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	
	W0319 13:54:54.395041   11918 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
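
The two probes being attempted here (percentage of /var used, then GiB of /var available just below) reduce to small shell pipelines that would normally run over SSH inside the node; since the container never came up, the SSH port lookup fails first. Run locally against a Linux /var (GNU df assumed), they look like this:

-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same pipelines as ssh_runner.go above, executed locally only to
	// show their shape; assumes GNU df, i.e. a Linux host.
	for _, script := range []string{
		`df -h /var | awk 'NR==2{print $5}'`,  // percent used, e.g. "42%"
		`df -BG /var | awk 'NR==2{print $4}'`, // GiB available, e.g. "58G"
	} {
		out, err := exec.Command("sh", "-c", script).CombinedOutput()
		fmt.Printf("%s => %q err=%v\n", script, strings.TrimSpace(string(out)), err)
	}
}
-- /sketch --
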
	I0319 13:54:54.395106   11918 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:54:54.395186   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 13:54:54.444127   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 13:54:54.444220   11918 retry.go:31] will retry after 297.667001ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:54:54.742414   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 13:54:54.794759   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 13:54:54.794866   11918 retry.go:31] will retry after 200.617634ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:54:54.997396   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 13:54:55.049113   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 13:54:55.049205   11918 retry.go:31] will retry after 492.40126ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:54:55.543099   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 13:54:55.594797   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	W0319 13:54:55.594898   11918 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	
	W0319 13:54:55.594916   11918 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:54:55.594929   11918 start.go:128] duration metric: took 6m2.867575195s to createHost
	I0319 13:54:55.594937   11918 start.go:83] releasing machines lock for "force-systemd-flag-509000", held for 6m2.867660895s
	W0319 13:54:55.594952   11918 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0319 13:54:55.595371   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:54:55.646183   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	I0319 13:54:55.646247   11918 delete.go:82] Unable to get host status for force-systemd-flag-509000, assuming it has already been deleted: state: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	W0319 13:54:55.646328   11918 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0319 13:54:55.646340   11918 start.go:728] Will try again in 5 seconds ...
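
The failure mode here is a deadline, not a crash: the config dump earlier sets StartHostTimeout:6m0s, and the error reports "timed out in 360.000000 seconds". A sketch of that shape with a context deadline, using tiny durations so the timeout path actually fires:

-- sketch (Go) --
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the real work; it either finishes or gives
// up when the context's deadline expires.
func createHost(ctx context.Context, work time.Duration) error {
	select {
	case <-time.After(work):
		return nil
	case <-ctx.Done():
		return errors.New("creating host: create host timed out")
	}
}

func main() {
	// minikube's budget is 6m (StartHostTimeout:6m0s above); 50ms here
	// just to exercise the timeout branch.
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	if err := createHost(ctx, time.Second); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		fmt.Println("retry:", createHost(context.Background(), 10*time.Millisecond))
	}
}
-- /sketch --
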
	I0319 13:55:00.647090   11918 start.go:360] acquireMachinesLock for force-systemd-flag-509000: {Name:mk5bbb21f642d77f39af40d4fdfd93723ece0ece Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 13:55:00.647287   11918 start.go:364] duration metric: took 147.584µs to acquireMachinesLock for "force-systemd-flag-509000"
	I0319 13:55:00.647329   11918 start.go:96] Skipping create...Using existing machine configuration
	I0319 13:55:00.647347   11918 fix.go:54] fixHost starting: 
	I0319 13:55:00.647747   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:00.700706   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	I0319 13:55:00.700760   11918 fix.go:112] recreateIfNeeded on force-systemd-flag-509000: state= err=unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:00.700784   11918 fix.go:117] machineExists: false. err=machine does not exist
	I0319 13:55:00.743709   11918 out.go:177] * docker "force-systemd-flag-509000" container is missing, will recreate.
	I0319 13:55:00.765793   11918 delete.go:124] DEMOLISHING force-systemd-flag-509000 ...
	I0319 13:55:00.765980   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:00.818079   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	W0319 13:55:00.818137   11918 stop.go:83] unable to get state: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:00.818157   11918 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:00.818535   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:00.867733   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	I0319 13:55:00.867804   11918 delete.go:82] Unable to get host status for force-systemd-flag-509000, assuming it has already been deleted: state: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:00.867894   11918 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-509000
	W0319 13:55:00.917198   11918 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-509000 returned with exit code 1
	I0319 13:55:00.917235   11918 kic.go:371] could not find the container force-systemd-flag-509000 to remove it. will try anyways
	I0319 13:55:00.917307   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:00.967056   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	W0319 13:55:00.967109   11918 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:00.967196   11918 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-509000 /bin/bash -c "sudo init 0"
	W0319 13:55:01.016109   11918 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-509000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0319 13:55:01.016140   11918 oci.go:650] error shutdown force-systemd-flag-509000: docker exec --privileged -t force-systemd-flag-509000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:02.017325   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:02.068495   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	I0319 13:55:02.068549   11918 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:02.068561   11918 oci.go:664] temporary error: container force-systemd-flag-509000 status is  but expect it to be exited
	I0319 13:55:02.068588   11918 retry.go:31] will retry after 432.239691ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:02.501296   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:02.553600   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	I0319 13:55:02.553650   11918 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:02.553665   11918 oci.go:664] temporary error: container force-systemd-flag-509000 status is  but expect it to be exited
	I0319 13:55:02.553686   11918 retry.go:31] will retry after 1.094693983s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:03.648736   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:03.701727   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	I0319 13:55:03.701779   11918 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:03.701795   11918 oci.go:664] temporary error: container force-systemd-flag-509000 status is  but expect it to be exited
	I0319 13:55:03.701822   11918 retry.go:31] will retry after 748.733012ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:04.451026   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:04.503686   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	I0319 13:55:04.503732   11918 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:04.503749   11918 oci.go:664] temporary error: container force-systemd-flag-509000 status is  but expect it to be exited
	I0319 13:55:04.503778   11918 retry.go:31] will retry after 2.021960726s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:06.528037   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:06.581414   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	I0319 13:55:06.581462   11918 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:06.581478   11918 oci.go:664] temporary error: container force-systemd-flag-509000 status is  but expect it to be exited
	I0319 13:55:06.581504   11918 retry.go:31] will retry after 1.798566492s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:08.381334   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:08.434916   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	I0319 13:55:08.434968   11918 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:08.434980   11918 oci.go:664] temporary error: container force-systemd-flag-509000 status is  but expect it to be exited
	I0319 13:55:08.435006   11918 retry.go:31] will retry after 2.580365354s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:11.016506   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:11.067585   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	I0319 13:55:11.067634   11918 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:11.067649   11918 oci.go:664] temporary error: container force-systemd-flag-509000 status is  but expect it to be exited
	I0319 13:55:11.067678   11918 retry.go:31] will retry after 4.320649056s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:15.390300   11918 cli_runner.go:164] Run: docker container inspect force-systemd-flag-509000 --format={{.State.Status}}
	W0319 13:55:15.443023   11918 cli_runner.go:211] docker container inspect force-systemd-flag-509000 --format={{.State.Status}} returned with exit code 1
	I0319 13:55:15.443071   11918 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 13:55:15.443083   11918 oci.go:664] temporary error: container force-systemd-flag-509000 status is  but expect it to be exited
	I0319 13:55:15.443112   11918 oci.go:88] couldn't shut down force-systemd-flag-509000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	 
	I0319 13:55:15.443191   11918 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-509000
	I0319 13:55:15.493982   11918 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-509000
	W0319 13:55:15.543467   11918 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-509000 returned with exit code 1
	I0319 13:55:15.543587   11918 cli_runner.go:164] Run: docker network inspect force-systemd-flag-509000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:55:15.593372   11918 cli_runner.go:164] Run: docker network rm force-systemd-flag-509000
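
The DEMOLISHING block above follows a tolerant teardown order: try a graceful power-off inside the container, poll its state, then force-remove the container with its volume and finally the per-cluster network, treating "No such container" as acceptable at every step. A sketch of that order:

-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
)

// run shells out to docker and reports the outcome; errors are
// deliberately tolerated, as in the delete.go lines above.
func run(args ...string) error {
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("docker %v -> err=%v %s", args, err, out)
	return err
}

func main() {
	name := "force-systemd-flag-509000"
	// 1. Graceful: ask init inside the container to power off.
	_ = run("exec", "--privileged", "-t", name, "/bin/bash", "-c", "sudo init 0")
	// 2. Forceful: remove the container and its anonymous volumes.
	_ = run("rm", "-f", "-v", name)
	// 3. Cleanup: drop the per-cluster bridge network.
	_ = run("network", "rm", name)
}
-- /sketch --
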
	I0319 13:55:15.703730   11918 fix.go:124] Sleeping 1 second for extra luck!
	I0319 13:55:16.705900   11918 start.go:125] createHost starting for "" (driver="docker")
	I0319 13:55:16.729450   11918 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0319 13:55:16.729618   11918 start.go:159] libmachine.API.Create for "force-systemd-flag-509000" (driver="docker")
	I0319 13:55:16.729654   11918 client.go:168] LocalClient.Create starting
	I0319 13:55:16.729874   11918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 13:55:16.729969   11918 main.go:141] libmachine: Decoding PEM data...
	I0319 13:55:16.729995   11918 main.go:141] libmachine: Parsing certificate...
	I0319 13:55:16.730078   11918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 13:55:16.730147   11918 main.go:141] libmachine: Decoding PEM data...
	I0319 13:55:16.730163   11918 main.go:141] libmachine: Parsing certificate...
	I0319 13:55:16.731013   11918 cli_runner.go:164] Run: docker network inspect force-systemd-flag-509000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 13:55:16.783758   11918 cli_runner.go:211] docker network inspect force-systemd-flag-509000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 13:55:16.783843   11918 network_create.go:281] running [docker network inspect force-systemd-flag-509000] to gather additional debugging logs...
	I0319 13:55:16.783863   11918 cli_runner.go:164] Run: docker network inspect force-systemd-flag-509000
	W0319 13:55:16.835349   11918 cli_runner.go:211] docker network inspect force-systemd-flag-509000 returned with exit code 1
	I0319 13:55:16.835379   11918 network_create.go:284] error running [docker network inspect force-systemd-flag-509000]: docker network inspect force-systemd-flag-509000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-509000 not found
	I0319 13:55:16.835393   11918 network_create.go:286] output of [docker network inspect force-systemd-flag-509000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-509000 not found
	
	** /stderr **
	I0319 13:55:16.835538   11918 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:55:16.969750   11918 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:55:16.971681   11918 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:55:16.973671   11918 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:55:16.975640   11918 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:55:16.977724   11918 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:55:16.978460   11918 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002285ab0}
	I0319 13:55:16.978487   11918 network_create.go:124] attempt to create docker network force-systemd-flag-509000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0319 13:55:16.978623   11918 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-509000 force-systemd-flag-509000
	I0319 13:55:17.065962   11918 network_create.go:108] docker network force-systemd-flag-509000 192.168.94.0/24 created
	I0319 13:55:17.066003   11918 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-509000" container
	I0319 13:55:17.066111   11918 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 13:55:17.117121   11918 cli_runner.go:164] Run: docker volume create force-systemd-flag-509000 --label name.minikube.sigs.k8s.io=force-systemd-flag-509000 --label created_by.minikube.sigs.k8s.io=true
	I0319 13:55:17.166841   11918 oci.go:103] Successfully created a docker volume force-systemd-flag-509000
	I0319 13:55:17.166991   11918 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-509000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-509000 --entrypoint /usr/bin/test -v force-systemd-flag-509000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 13:55:17.485537   11918 oci.go:107] Successfully prepared a docker volume force-systemd-flag-509000
	I0319 13:55:17.485591   11918 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:55:17.485604   11918 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 13:55:17.485720   11918 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-509000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0319 14:01:16.729238   11918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 14:01:16.729338   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:16.781507   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:16.781621   11918 retry.go:31] will retry after 259.267893ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:17.042036   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:17.095437   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:17.095539   11918 retry.go:31] will retry after 421.12946ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:17.519077   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:17.571020   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:17.571132   11918 retry.go:31] will retry after 621.836229ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:18.193605   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:18.245869   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	W0319 14:01:18.245976   11918 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	
	W0319 14:01:18.245995   11918 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:18.246070   11918 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 14:01:18.246138   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:18.295944   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:18.296046   11918 retry.go:31] will retry after 365.345842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:18.662303   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:18.713003   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:18.713117   11918 retry.go:31] will retry after 285.318089ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:19.000102   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:19.051289   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:19.051381   11918 retry.go:31] will retry after 692.281862ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:19.744995   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:19.798325   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	W0319 14:01:19.798430   11918 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	
	W0319 14:01:19.798448   11918 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:19.798458   11918 start.go:128] duration metric: took 6m3.094173309s to createHost
	I0319 14:01:19.798534   11918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 14:01:19.798591   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:19.849641   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:19.849755   11918 retry.go:31] will retry after 239.910514ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:20.090008   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:20.141782   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:20.141876   11918 retry.go:31] will retry after 358.719687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:20.501212   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:20.552665   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:20.552757   11918 retry.go:31] will retry after 417.48281ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:20.972014   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:21.023443   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:21.023545   11918 retry.go:31] will retry after 627.479592ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:21.653461   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:21.705269   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	W0319 14:01:21.705376   11918 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	
	W0319 14:01:21.705392   11918 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:21.705462   11918 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 14:01:21.705519   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:21.754862   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:21.754958   11918 retry.go:31] will retry after 241.341195ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:21.998664   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:22.050322   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:22.050428   11918 retry.go:31] will retry after 407.939006ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:22.460604   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:22.512167   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:22.512263   11918 retry.go:31] will retry after 316.651612ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:22.831208   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:22.884810   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	I0319 14:01:22.884905   11918 retry.go:31] will retry after 581.732383ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:23.469101   11918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000
	W0319 14:01:23.523544   11918 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000 returned with exit code 1
	W0319 14:01:23.523641   11918 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	
	W0319 14:01:23.523654   11918 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-509000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-509000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	I0319 14:01:23.523667   11918 fix.go:56] duration metric: took 6m22.878165829s for fixHost
	I0319 14:01:23.523676   11918 start.go:83] releasing machines lock for "force-systemd-flag-509000", held for 6m22.878220774s
	W0319 14:01:23.523754   11918 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-509000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-509000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0319 14:01:23.568354   11918 out.go:177] 
	W0319 14:01:23.590384   11918 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0319 14:01:23.590429   11918 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0319 14:01:23.590469   11918 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0319 14:01:23.634322   11918 out.go:177] 

** /stderr **
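
The retry.go lines in the trace above all wrap a single operation: minikube resolves the node's SSH endpoint by asking Docker which host port is published for 22/tcp, using the inspect template shown, and retries with jittered backoff while the container does not exist. A minimal, self-contained sketch of that pattern in Go, assuming only a docker CLI on PATH (the function name and delay values are illustrative, not minikube's actual implementation; minikube additionally wraps the template in single quotes, omitted here):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // sshHostPort asks Docker which host port is published for 22/tcp on the
    // named container. While the container does not exist, `docker container
    // inspect` exits 1, exactly as the warnings above show.
    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", fmt.Errorf("get port 22 for %q: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        // Jittered backoff, mirroring the ~300-600ms intervals in the log.
        delays := []time.Duration{407 * time.Millisecond, 316 * time.Millisecond, 581 * time.Millisecond}
        for _, d := range delays {
            if port, err := sshHostPort("force-systemd-flag-509000"); err == nil {
                fmt.Println("ssh port:", port)
                return
            }
            time.Sleep(d)
        }
        fmt.Println("giving up: container never appeared")
    }
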
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-509000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-509000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-509000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (199.226773ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-509000 host status: state: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-509000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
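
For context on the assertion that just failed: `docker info --format {{.CgroupDriver}}` (run inside the node via `minikube ssh`) prints which cgroup driver the node's Docker daemon uses; given the test's name, the expected value is presumably "systemd", though this run never gets far enough to compare. A sketch of the check, reusing the binary path and profile name from the log (the expected value is an assumption inferred from the test name):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The inner command is executed on the node; {{.CgroupDriver}} is a
        // standard `docker info` Go template.
        out, err := exec.Command("out/minikube-darwin-amd64", "-p", "force-systemd-flag-509000",
            "ssh", "docker info --format {{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Println("ssh failed (exit status 80 above, since the node is gone):", err)
            return
        }
        if got := strings.TrimSpace(string(out)); got != "systemd" {
            fmt.Printf("expected cgroup driver %q, got %q\n", "systemd", got)
        }
    }
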
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-19 14:01:23.914341 -0700 PDT m=+7034.319956733
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-509000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-509000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-509000",
	        "Id": "20b12783363238e18439d49e55f41fa49bbaadfd2b405e53785bca1c47e8ad50",
	        "Created": "2024-03-19T20:55:17.026540134Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-509000"
	        }
	    }
	]

-- /stdout --
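
Note that the JSON above is not a container: the container is gone, so `docker inspect` matched the leftover bridge *network* of the same name, which still carries minikube's labels. Those labels are what make such orphans findable; a small sketch using docker's standard label filter (label key copied from the inspect output above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // List networks created by minikube, newline-separated by name.
        out, err := exec.Command("docker", "network", "ls",
            "--filter", "label=created_by.minikube.sigs.k8s.io=true",
            "--format", "{{.Name}}").Output()
        if err != nil {
            fmt.Println("docker network ls failed:", err)
            return
        }
        fmt.Printf("minikube-created networks:\n%s", out)
    }

The cleanup step at the end of this test (`minikube delete -p force-systemd-flag-509000`) is what removes this leftover network.
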
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-509000 -n force-systemd-flag-509000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-509000 -n force-systemd-flag-509000: exit status 7 (112.615782ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0319 14:01:24.078617   12427 status.go:249] status error: host: state: unknown state "force-systemd-flag-509000": docker container inspect force-systemd-flag-509000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-509000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-509000" host is not running, skipping log retrieval (state="Nonexistent")
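
`status --format={{.Host}}` renders just the host field of minikube's status output, and a non-zero exit code accompanies any not-running state; the pairing of "Nonexistent" with exit status 7 above is what the harness's "(may be ok)" branch keys on before skipping log retrieval. A sketch of reading both, assuming the same binary path the log uses:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-amd64", "status",
            "--format={{.Host}}", "-p", "force-systemd-flag-509000")
        out, err := cmd.Output()
        state := strings.TrimSpace(string(out)) // e.g. "Running" or "Nonexistent"
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // In the run above, exit code 7 accompanied "Nonexistent".
            fmt.Printf("state=%s exit=%d\n", state, ee.ExitCode())
            return
        }
        fmt.Println("state:", state)
    }
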
helpers_test.go:175: Cleaning up "force-systemd-flag-509000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-509000
--- FAIL: TestForceSystemdFlag (752.95s)

TestForceSystemdEnv (758.96s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-506000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0319 13:38:02.757927    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:39:59.700008    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:40:25.961860    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 13:43:29.010469    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 13:44:59.697875    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:45:25.961106    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-506000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m37.848900383s)

-- stdout --
	* [force-systemd-env-506000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-506000" primary control-plane node in "force-systemd-env-506000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-506000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0319 13:37:16.072900   11542 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:37:16.073164   11542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:37:16.073170   11542 out.go:304] Setting ErrFile to fd 2...
	I0319 13:37:16.073174   11542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:37:16.073349   11542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:37:16.074752   11542 out.go:298] Setting JSON to false
	I0319 13:37:16.097104   11542 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5807,"bootTime":1710874829,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 13:37:16.097190   11542 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 13:37:16.119454   11542 out.go:177] * [force-systemd-env-506000] minikube v1.32.0 on Darwin 14.3.1
	I0319 13:37:16.162393   11542 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 13:37:16.162460   11542 notify.go:220] Checking for updates...
	I0319 13:37:16.205946   11542 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 13:37:16.248151   11542 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 13:37:16.269056   11542 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 13:37:16.290129   11542 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	I0319 13:37:16.311193   11542 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0319 13:37:16.332621   11542 config.go:182] Loaded profile config "offline-docker-947000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:37:16.332725   11542 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 13:37:16.387359   11542 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 13:37:16.387535   11542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:37:16.485780   11542 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:false NGoroutines:171 SystemTime:2024-03-19 20:37:16.474359739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 13:37:16.528379   11542 out.go:177] * Using the docker driver based on user configuration
	I0319 13:37:16.549390   11542 start.go:297] selected driver: docker
	I0319 13:37:16.549408   11542 start.go:901] validating driver "docker" against <nil>
	I0319 13:37:16.549423   11542 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 13:37:16.553560   11542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:37:16.653468   11542 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:false NGoroutines:171 SystemTime:2024-03-19 20:37:16.642617069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
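
The two info.go:266 dumps above are the parsed result of `docker system info --format "{{json .}}"`, which minikube uses to validate the driver (CPU and memory capacity, cgroup driver, daemon warnings). A sketch of extracting a few of those fields; the struct lists only keys visible in the dump, and its exact shape is an assumption about the daemon's JSON:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Only fields visible in the dump above; the real payload is much larger.
    type dockerInfo struct {
        NCPU            int
        MemTotal        int64
        CgroupDriver    string
        OperatingSystem string
        ServerVersion   string
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker system info failed:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("parse:", err)
            return
        }
        fmt.Printf("%s %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
            info.OperatingSystem, info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver)
    }
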
	I0319 13:37:16.653645   11542 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 13:37:16.653830   11542 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 13:37:16.675674   11542 out.go:177] * Using Docker Desktop driver with root privileges
	I0319 13:37:16.697736   11542 cni.go:84] Creating CNI manager for ""
	I0319 13:37:16.697782   11542 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0319 13:37:16.697804   11542 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 13:37:16.697913   11542 start.go:340] cluster config:
	{Name:force-systemd-env-506000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 13:37:16.719587   11542 out.go:177] * Starting "force-systemd-env-506000" primary control-plane node in "force-systemd-env-506000" cluster
	I0319 13:37:16.763817   11542 cache.go:121] Beginning downloading kic base image for docker with docker
	I0319 13:37:16.785524   11542 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0319 13:37:16.833448   11542 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:37:16.833528   11542 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0319 13:37:16.833532   11542 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0319 13:37:16.833559   11542 cache.go:56] Caching tarball of preloaded images
	I0319 13:37:16.833784   11542 preload.go:173] Found /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0319 13:37:16.833805   11542 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0319 13:37:16.833953   11542 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/force-systemd-env-506000/config.json ...
	I0319 13:37:16.834757   11542 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/force-systemd-env-506000/config.json: {Name:mkeb932c169d68835da5a3e196aec85d49414692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 13:37:16.898276   11542 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0319 13:37:16.898295   11542 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0319 13:37:16.898329   11542 cache.go:194] Successfully downloaded all kic artifacts
	I0319 13:37:16.898374   11542 start.go:360] acquireMachinesLock for force-systemd-env-506000: {Name:mk1953d3b60b5a0057b262130abcc78fbe27e51c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 13:37:16.898517   11542 start.go:364] duration metric: took 132.336µs to acquireMachinesLock for "force-systemd-env-506000"
	I0319 13:37:16.898544   11542 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-506000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0319 13:37:16.898602   11542 start.go:125] createHost starting for "" (driver="docker")
	I0319 13:37:16.942218   11542 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0319 13:37:16.942560   11542 start.go:159] libmachine.API.Create for "force-systemd-env-506000" (driver="docker")
	I0319 13:37:16.942603   11542 client.go:168] LocalClient.Create starting
	I0319 13:37:16.942801   11542 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 13:37:16.942893   11542 main.go:141] libmachine: Decoding PEM data...
	I0319 13:37:16.942927   11542 main.go:141] libmachine: Parsing certificate...
	I0319 13:37:16.943019   11542 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 13:37:16.943092   11542 main.go:141] libmachine: Decoding PEM data...
	I0319 13:37:16.943109   11542 main.go:141] libmachine: Parsing certificate...
	I0319 13:37:16.944125   11542 cli_runner.go:164] Run: docker network inspect force-systemd-env-506000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 13:37:16.994743   11542 cli_runner.go:211] docker network inspect force-systemd-env-506000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 13:37:16.994849   11542 network_create.go:281] running [docker network inspect force-systemd-env-506000] to gather additional debugging logs...
	I0319 13:37:16.994864   11542 cli_runner.go:164] Run: docker network inspect force-systemd-env-506000
	W0319 13:37:17.044057   11542 cli_runner.go:211] docker network inspect force-systemd-env-506000 returned with exit code 1
	I0319 13:37:17.044093   11542 network_create.go:284] error running [docker network inspect force-systemd-env-506000]: docker network inspect force-systemd-env-506000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-506000 not found
	I0319 13:37:17.044106   11542 network_create.go:286] output of [docker network inspect force-systemd-env-506000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-506000 not found
	
	** /stderr **
	I0319 13:37:17.044223   11542 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:37:17.095510   11542 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:37:17.097026   11542 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:37:17.098662   11542 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:37:17.100097   11542 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:37:17.100457   11542 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002402e70}
	I0319 13:37:17.100474   11542 network_create.go:124] attempt to create docker network force-systemd-env-506000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0319 13:37:17.100553   11542 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-506000 force-systemd-env-506000
	I0319 13:37:17.185590   11542 network_create.go:108] docker network force-systemd-env-506000 192.168.85.0/24 created
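
The four "skipping subnet" lines above show how a free subnet is found: candidates start at 192.168.49.0/24 and the third octet steps by 9 (49, 58, 67, 76, 85, ...) until one is neither reserved by another profile nor otherwise in use; the winner is handed to `docker network create` with a .1 gateway and the MTU and labels shown. A compact, purely illustrative sketch of the stepping logic (the real reservation checks live in minikube and are not reproduced here):

    package main

    import "fmt"

    func main() {
        // Subnets the log reports as already reserved.
        taken := map[int]bool{49: true, 58: true, 67: true, 76: true}
        for octet := 49; octet <= 254; octet += 9 { // 49, 58, 67, 76, 85, ...
            if taken[octet] {
                fmt.Printf("skipping subnet 192.168.%d.0/24 that is reserved\n", octet)
                continue
            }
            fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway 192.168.%d.1)\n", octet, octet)
            break
        }
    }

Run as written, this picks 192.168.85.0/24, matching the network_create.go line above.
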
	I0319 13:37:17.185633   11542 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-506000" container
	I0319 13:37:17.185732   11542 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 13:37:17.236092   11542 cli_runner.go:164] Run: docker volume create force-systemd-env-506000 --label name.minikube.sigs.k8s.io=force-systemd-env-506000 --label created_by.minikube.sigs.k8s.io=true
	I0319 13:37:17.287196   11542 oci.go:103] Successfully created a docker volume force-systemd-env-506000
	I0319 13:37:17.287322   11542 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-506000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-506000 --entrypoint /usr/bin/test -v force-systemd-env-506000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 13:37:17.650211   11542 oci.go:107] Successfully prepared a docker volume force-systemd-env-506000
	I0319 13:37:17.650271   11542 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:37:17.650283   11542 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 13:37:17.650397   11542 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-506000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
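
The preload step needs no SSH: the cached lz4 image tarball is bind-mounted read-only into a throwaway kicbase container whose entrypoint is tar, which unpacks it straight into the profile's named volume. The same invocation decomposed in Go (paths and names copied from the log; the image digest is omitted here for brevity):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4"
        args := []string{
            "run", "--rm",
            "--entrypoint", "/usr/bin/tar", // run tar instead of the node's init
            "-v", tarball + ":/preloaded.tar:ro", // cached images, read-only
            "-v", "force-systemd-env-506000:/extractDir", // the profile's named volume
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
        }
        if err := exec.Command("docker", args...).Run(); err != nil {
            fmt.Println("extract failed:", err)
        }
    }
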
	I0319 13:43:16.943089   11542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:43:16.943240   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:43:16.995724   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:43:16.995853   11542 retry.go:31] will retry after 335.167924ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:17.331517   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:43:17.384111   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:43:17.384222   11542 retry.go:31] will retry after 396.09395ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:17.780925   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:43:17.835191   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:43:17.835301   11542 retry.go:31] will retry after 621.896777ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:18.459701   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:43:18.510585   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	W0319 13:43:18.510691   11542 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	
	W0319 13:43:18.510716   11542 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:18.510769   11542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:43:18.510824   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:43:18.560029   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:43:18.560119   11542 retry.go:31] will retry after 283.531277ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:18.845985   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:43:18.896873   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:43:18.896965   11542 retry.go:31] will retry after 200.8277ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:19.098861   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:43:19.149663   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:43:19.149749   11542 retry.go:31] will retry after 664.94128ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:19.815608   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:43:19.868956   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:43:19.869057   11542 retry.go:31] will retry after 609.963541ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:20.479479   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:43:20.530442   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	W0319 13:43:20.530542   11542 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	
	W0319 13:43:20.530558   11542 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:20.530577   11542 start.go:128] duration metric: took 6m3.632743767s to createHost
	I0319 13:43:20.530586   11542 start.go:83] releasing machines lock for "force-systemd-env-506000", held for 6m3.632850196s
	W0319 13:43:20.530600   11542 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0319 13:43:20.531013   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:20.580374   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	I0319 13:43:20.580437   11542 delete.go:82] Unable to get host status for force-systemd-env-506000, assuming it has already been deleted: state: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	W0319 13:43:20.580529   11542 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0319 13:43:20.580538   11542 start.go:728] Will try again in 5 seconds ...
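
The two probes that failed above are plain shell pipelines run over SSH: `df -h /var | awk 'NR==2{print $5}'` (percent of /var used) and `df -BG /var | awk 'NR==2{print $4}'` (GiB available). NR==2 selects df's data row and the field number picks the column; both fail here only because there is no container to SSH into. A local illustration of the same parsing against sample output (the sample values are invented):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Sample `df -BG /var` output; awk's NR==2 is the second line, $4 the
        // fourth whitespace-separated field (Avail), $5 the fifth (Use%).
        sample := `Filesystem     1G-blocks  Used Available Use% Mounted on
    /dev/vda1            20G    4G       16G  20% /var`
        lines := strings.Split(sample, "\n")
        fields := strings.Fields(lines[1]) // NR==2
        fmt.Println("available:", fields[3], "| use%:", fields[4])
    }
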
	I0319 13:43:25.582749   11542 start.go:360] acquireMachinesLock for force-systemd-env-506000: {Name:mk1953d3b60b5a0057b262130abcc78fbe27e51c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 13:43:25.583765   11542 start.go:364] duration metric: took 176.332µs to acquireMachinesLock for "force-systemd-env-506000"
	I0319 13:43:25.583817   11542 start.go:96] Skipping create...Using existing machine configuration
	I0319 13:43:25.583833   11542 fix.go:54] fixHost starting: 
	I0319 13:43:25.584364   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:25.634992   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	I0319 13:43:25.635037   11542 fix.go:112] recreateIfNeeded on force-systemd-env-506000: state= err=unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:25.635053   11542 fix.go:117] machineExists: false. err=machine does not exist
	I0319 13:43:25.656148   11542 out.go:177] * docker "force-systemd-env-506000" container is missing, will recreate.
	I0319 13:43:25.701797   11542 delete.go:124] DEMOLISHING force-systemd-env-506000 ...
	I0319 13:43:25.701998   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:25.752331   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	W0319 13:43:25.752386   11542 stop.go:83] unable to get state: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:25.752410   11542 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:25.752778   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:25.802065   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	I0319 13:43:25.802119   11542 delete.go:82] Unable to get host status for force-systemd-env-506000, assuming it has already been deleted: state: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:25.802215   11542 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-506000
	W0319 13:43:25.851406   11542 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-506000 returned with exit code 1
	I0319 13:43:25.851446   11542 kic.go:371] could not find the container force-systemd-env-506000 to remove it. will try anyways
	I0319 13:43:25.851529   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:25.900557   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	W0319 13:43:25.900603   11542 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:25.900688   11542 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-506000 /bin/bash -c "sudo init 0"
	W0319 13:43:25.949564   11542 cli_runner.go:211] docker exec --privileged -t force-systemd-env-506000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0319 13:43:25.949603   11542 oci.go:650] error shutdown force-systemd-env-506000: docker exec --privileged -t force-systemd-env-506000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:26.950792   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:27.002753   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	I0319 13:43:27.002810   11542 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:27.002822   11542 oci.go:664] temporary error: container force-systemd-env-506000 status is  but expect it to be exited
	I0319 13:43:27.002848   11542 retry.go:31] will retry after 504.688967ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:27.507986   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:27.561309   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	I0319 13:43:27.561366   11542 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:27.561375   11542 oci.go:664] temporary error: container force-systemd-env-506000 status is  but expect it to be exited
	I0319 13:43:27.561401   11542 retry.go:31] will retry after 823.971293ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:28.386327   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:28.438172   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	I0319 13:43:28.438235   11542 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:28.438247   11542 oci.go:664] temporary error: container force-systemd-env-506000 status is  but expect it to be exited
	I0319 13:43:28.438271   11542 retry.go:31] will retry after 664.115143ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:29.104712   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:29.158560   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	I0319 13:43:29.158609   11542 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:29.158624   11542 oci.go:664] temporary error: container force-systemd-env-506000 status is  but expect it to be exited
	I0319 13:43:29.158652   11542 retry.go:31] will retry after 2.178443788s: couldn't verify container is exited. %v: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:31.339191   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:31.391927   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	I0319 13:43:31.391980   11542 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:31.391994   11542 oci.go:664] temporary error: container force-systemd-env-506000 status is  but expect it to be exited
	I0319 13:43:31.392019   11542 retry.go:31] will retry after 2.703828564s: couldn't verify container is exited. %v: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:34.096415   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:34.149785   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	I0319 13:43:34.149840   11542 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:34.149853   11542 oci.go:664] temporary error: container force-systemd-env-506000 status is  but expect it to be exited
	I0319 13:43:34.149878   11542 retry.go:31] will retry after 5.510291432s: couldn't verify container is exited. %v: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:39.662434   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:39.715733   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	I0319 13:43:39.715797   11542 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:39.715808   11542 oci.go:664] temporary error: container force-systemd-env-506000 status is  but expect it to be exited
	I0319 13:43:39.715831   11542 retry.go:31] will retry after 5.589925113s: couldn't verify container is exited. %v: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:45.307632   11542 cli_runner.go:164] Run: docker container inspect force-systemd-env-506000 --format={{.State.Status}}
	W0319 13:43:45.365568   11542 cli_runner.go:211] docker container inspect force-systemd-env-506000 --format={{.State.Status}} returned with exit code 1
	I0319 13:43:45.365621   11542 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:43:45.365632   11542 oci.go:664] temporary error: container force-systemd-env-506000 status is  but expect it to be exited
	I0319 13:43:45.365664   11542 oci.go:88] couldn't shut down force-systemd-env-506000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	 
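Note on the loop above: the literal "%v" in "couldn't verify container is exited. %v" is an unexpanded format verb in minikube's own error string, and "status is  but expect it to be exited" reads oddly because inspect never returned a state at all. The retries poll `docker container inspect --format {{.State.Status}}` with a growing, jittered delay (664ms, 2.1s, 2.7s, 5.5s, ...) until a budget runs out. A minimal Go sketch of that pattern, with illustrative names and plain doubling in place of minikube's jittered backoff:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // containerStatus runs the same inspect template seen in the log.
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitExited polls until the container reports "exited" or the budget runs out.
    func waitExited(name string, budget time.Duration) error {
        backoff := 500 * time.Millisecond
        for start := time.Now(); time.Since(start) < budget; {
            status, err := containerStatus(name)
            if err == nil && status == "exited" {
                return nil
            }
            time.Sleep(backoff)
            backoff *= 2 // the real loop randomizes each delay
        }
        return fmt.Errorf("couldn't verify container %s is exited", name)
    }

    func main() {
        fmt.Println(waitExited("force-systemd-env-506000", 17*time.Second))
    }
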
	I0319 13:43:45.365747   11542 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-506000
	I0319 13:43:45.421311   11542 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-506000
	W0319 13:43:45.473237   11542 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-506000 returned with exit code 1
	I0319 13:43:45.473349   11542 cli_runner.go:164] Run: docker network inspect force-systemd-env-506000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:43:45.522585   11542 cli_runner.go:164] Run: docker network rm force-systemd-env-506000
	I0319 13:43:45.630146   11542 fix.go:124] Sleeping 1 second for extra luck!
	I0319 13:43:46.632386   11542 start.go:125] createHost starting for "" (driver="docker")
	I0319 13:43:46.654382   11542 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0319 13:43:46.654566   11542 start.go:159] libmachine.API.Create for "force-systemd-env-506000" (driver="docker")
	I0319 13:43:46.654593   11542 client.go:168] LocalClient.Create starting
	I0319 13:43:46.654832   11542 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 13:43:46.654933   11542 main.go:141] libmachine: Decoding PEM data...
	I0319 13:43:46.654958   11542 main.go:141] libmachine: Parsing certificate...
	I0319 13:43:46.655034   11542 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 13:43:46.655103   11542 main.go:141] libmachine: Decoding PEM data...
	I0319 13:43:46.655118   11542 main.go:141] libmachine: Parsing certificate...
	I0319 13:43:46.677354   11542 cli_runner.go:164] Run: docker network inspect force-systemd-env-506000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 13:43:46.729540   11542 cli_runner.go:211] docker network inspect force-systemd-env-506000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 13:43:46.729639   11542 network_create.go:281] running [docker network inspect force-systemd-env-506000] to gather additional debugging logs...
	I0319 13:43:46.729659   11542 cli_runner.go:164] Run: docker network inspect force-systemd-env-506000
	W0319 13:43:46.778260   11542 cli_runner.go:211] docker network inspect force-systemd-env-506000 returned with exit code 1
	I0319 13:43:46.778293   11542 network_create.go:284] error running [docker network inspect force-systemd-env-506000]: docker network inspect force-systemd-env-506000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-506000 not found
	I0319 13:43:46.778305   11542 network_create.go:286] output of [docker network inspect force-systemd-env-506000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-506000 not found
	
	** /stderr **
	I0319 13:43:46.778422   11542 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:43:46.829835   11542 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:43:46.831481   11542 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:43:46.832947   11542 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:43:46.834482   11542 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:43:46.836088   11542 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:43:46.837692   11542 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:43:46.838095   11542 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00211d2d0}
	I0319 13:43:46.838107   11542 network_create.go:124] attempt to create docker network force-systemd-env-506000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0319 13:43:46.838174   11542 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-506000 force-systemd-env-506000
	I0319 13:43:46.923294   11542 network_create.go:108] docker network force-systemd-env-506000 192.168.103.0/24 created
	I0319 13:43:46.923334   11542 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-506000" container
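The network.go lines above scan candidate private /24 subnets, stepping the third octet by 9 from 192.168.49.0/24, skipping any range an existing Docker network already reserves, then create the bridge with the .1 gateway and hand the node the first client address, .2. A runnable sketch of that scan; the reserved set is copied from this log, and the step size and starting point are as observed here rather than a documented contract:

    package main

    import "fmt"

    func main() {
        reserved := map[string]bool{ // subnets the log shows as taken
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
            "192.168.85.0/24": true, "192.168.94.0/24": true,
        }
        for octet := 49; octet <= 255; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if reserved[subnet] {
                continue
            }
            // First free candidate: 192.168.103.0/24, matching the log.
            fmt.Printf("subnet %s gateway 192.168.%d.1 node IP 192.168.%d.2\n",
                subnet, octet, octet)
            break
        }
    }
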
	I0319 13:43:46.923452   11542 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 13:43:46.974756   11542 cli_runner.go:164] Run: docker volume create force-systemd-env-506000 --label name.minikube.sigs.k8s.io=force-systemd-env-506000 --label created_by.minikube.sigs.k8s.io=true
	I0319 13:43:47.025221   11542 oci.go:103] Successfully created a docker volume force-systemd-env-506000
	I0319 13:43:47.025347   11542 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-506000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-506000 --entrypoint /usr/bin/test -v force-systemd-env-506000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 13:43:47.311612   11542 oci.go:107] Successfully prepared a docker volume force-systemd-env-506000
	I0319 13:43:47.311646   11542 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:43:47.311659   11542 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 13:43:47.311775   11542 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-506000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
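Note the six-minute gap that follows: the next log line is at 13:49:46, so this `docker run ... tar -I lz4 -xf` extraction of the cached-image tarball into the cluster's /var volume is where the createHost budget went, and the node container itself was never created. A sketch of the same extraction step driven from Go; the tarball path is shortened here, while the volume and image names are taken from the log:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mount the lz4 preload read-only, mount the named volume at
        // /extractDir, and untar inside a throwaway kicbase container.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
            "-v", "force-systemd-env-506000:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract preload: %v\n%s", err, out)
        }
    }
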
	I0319 13:49:46.653721   11542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:49:46.653868   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:46.706239   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:46.706354   11542 retry.go:31] will retry after 270.020446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:46.977000   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:47.030119   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:47.030232   11542 retry.go:31] will retry after 209.690996ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:47.240695   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:47.293498   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:47.293616   11542 retry.go:31] will retry after 752.173921ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:48.046465   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:48.099766   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:48.099864   11542 retry.go:31] will retry after 467.231562ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:48.569544   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:48.619868   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	W0319 13:49:48.619975   11542 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	
	W0319 13:49:48.619998   11542 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
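Every ssh-based probe from here on fails the same way: before `df` can run in the node, minikube must resolve the host port Docker bound to the container's 22/tcp, using the Go template shown in the inspect calls above, and with no container the inspect exits 1. A sketch of that lookup (the function name is illustrative, not minikube's API):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshPort reads the host port mapped to the node's 22/tcp, using the
    // exact template from the failing inspect calls above.
    func sshPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", fmt.Errorf("get port 22 for %q: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        fmt.Println(sshPort("force-systemd-env-506000"))
    }
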
	I0319 13:49:48.620053   11542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:49:48.620112   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:48.669060   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:48.669150   11542 retry.go:31] will retry after 273.46045ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:48.945024   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:48.995892   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:48.995984   11542 retry.go:31] will retry after 524.223643ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:49.522694   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:49.575570   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:49.575664   11542 retry.go:31] will retry after 578.733723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:50.155511   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:50.208795   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	W0319 13:49:50.208901   11542 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	
	W0319 13:49:50.208922   11542 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:50.208942   11542 start.go:128] duration metric: took 6m3.577624223s to createHost
	I0319 13:49:50.209005   11542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:49:50.209062   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:50.257363   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:50.257454   11542 retry.go:31] will retry after 179.725147ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:50.438121   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:50.491160   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:50.491253   11542 retry.go:31] will retry after 208.020798ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:50.701682   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:50.754486   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:50.754582   11542 retry.go:31] will retry after 590.878202ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:51.347826   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:51.400860   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:51.400953   11542 retry.go:31] will retry after 677.852488ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:52.079684   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:52.130913   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	W0319 13:49:52.131012   11542 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	
	W0319 13:49:52.131025   11542 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:52.131094   11542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:49:52.131151   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:52.182435   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:52.182525   11542 retry.go:31] will retry after 307.225523ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:52.491326   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:52.542781   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:52.542873   11542 retry.go:31] will retry after 471.769893ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:53.017126   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:53.069986   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	I0319 13:49:53.070083   11542 retry.go:31] will retry after 570.896715ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:53.641998   11542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000
	W0319 13:49:53.693433   11542 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000 returned with exit code 1
	W0319 13:49:53.693531   11542 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	
	W0319 13:49:53.693544   11542 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-506000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-506000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	I0319 13:49:53.693560   11542 fix.go:56] duration metric: took 6m28.110903949s for fixHost
	I0319 13:49:53.693567   11542 start.go:83] releasing machines lock for "force-systemd-env-506000", held for 6m28.110950921s
	W0319 13:49:53.693646   11542 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-506000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-506000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0319 13:49:53.753373   11542 out.go:177] 
	W0319 13:49:53.775265   11542 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0319 13:49:53.775296   11542 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0319 13:49:53.775337   11542 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0319 13:49:53.796177   11542 out.go:177] 

** /stderr **
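The arithmetic behind the exit: createHost has a 360.000000 s (6 min) budget, the log reports 6m3.577s for createHost and 6m28.110s for the whole fixHost pass, so the deadline fired while the preload extraction was still running, and DRV_CREATE_TIMEOUT surfaces as the exit status 52 reported below. A minimal sketch of enforcing such a budget with a context deadline; createHost here is a stand-in for the container-create work, not minikube's actual function:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // createHost simulates work that outlives its budget, as in this run.
    func createHost(ctx context.Context) error {
        select {
        case <-time.After(10 * time.Minute): // stands in for the stuck docker work
            return nil
        case <-ctx.Done():
            return errors.New("create host timed out in 360.000000 seconds")
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
        defer cancel()
        fmt.Println(createHost(ctx)) // prints the timeout error after 6 minutes
    }
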
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-506000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-506000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-506000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (201.679177ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-506000 host status: state: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-506000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
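For context, this assertion normally runs `docker info --format {{.CgroupDriver}}` inside the node and, because the test forces systemd via the environment, expects "systemd"; here it cannot even reach the node. A local sketch of the same query, with the expected value stated as the test's assumption rather than a verified fact:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "info",
            "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        driver := strings.TrimSpace(string(out))
        fmt.Printf("cgroup driver: %s (the test expects %q)\n", driver, "systemd")
    }
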
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-19 13:49:54.072589 -0700 PDT m=+6344.508833215
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-506000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-506000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-506000",
	        "Id": "7bc1ff0c7ea66f5e6ff35e1fa20e688c1271cb7687d91ce927b418bdfc4b478f",
	        "Created": "2024-03-19T20:43:46.884761797Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-506000"
	        }
	    }
	]

-- /stdout --
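Note what `docker inspect force-systemd-env-506000` actually matched: the JSON above has Scope, IPAM, and an empty Containers map; it is the leftover bridge network created at 13:43:46, not a container, because an untyped `docker inspect <name>` resolves any object kind with that name. Disambiguating with --type makes the distinction explicit, as in this sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, kind := range []string{"container", "network"} {
            // Only the network inspect succeeds for this name.
            err := exec.Command("docker", "inspect", "--type", kind,
                "force-systemd-env-506000").Run()
            fmt.Printf("inspect as %s: err=%v\n", kind, err)
        }
    }
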
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-506000 -n force-systemd-env-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-506000 -n force-systemd-env-506000: exit status 7 (113.881093ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0319 13:49:54.238976   12022 status.go:249] status error: host: state: unknown state "force-systemd-env-506000": docker container inspect force-systemd-env-506000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-506000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-506000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-506000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-506000
--- FAIL: TestForceSystemdEnv (758.96s)

TestMountStart/serial/VerifyMountFirst (892.04s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-432000 ssh -- ls /minikube-host
E0319 12:34:59.486340    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:35:25.748614    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:36:48.791379    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:39:59.478158    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:40:25.739209    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:44:59.471040    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:45:25.733294    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-1-432000 ssh -- ls /minikube-host: signal: killed (14m51.591598261s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-1-432000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-432000
helpers_test.go:235: (dbg) docker inspect mount-start-1-432000:

-- stdout --
	[
	    {
	        "Id": "7f2d9cf2406dd718fc30370ca9467f166ed41d46d4b7c371d7cb8e20d47ed5ba",
	        "Created": "2024-03-19T19:32:47.599333416Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 166119,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-19T19:32:47.810856778Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:824841ec881aeec3697aa896b6eaaaed4a34726d2ba99ff4b9ca0b12f150022e",
	        "ResolvConfPath": "/var/lib/docker/containers/7f2d9cf2406dd718fc30370ca9467f166ed41d46d4b7c371d7cb8e20d47ed5ba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f2d9cf2406dd718fc30370ca9467f166ed41d46d4b7c371d7cb8e20d47ed5ba/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f2d9cf2406dd718fc30370ca9467f166ed41d46d4b7c371d7cb8e20d47ed5ba/hosts",
	        "LogPath": "/var/lib/docker/containers/7f2d9cf2406dd718fc30370ca9467f166ed41d46d4b7c371d7cb8e20d47ed5ba/7f2d9cf2406dd718fc30370ca9467f166ed41d46d4b7c371d7cb8e20d47ed5ba-json.log",
	        "Name": "/mount-start-1-432000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-1-432000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-1-432000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1e4814a65f640a94ab6dfc55aaf29d6565b5180a0ae16bef139c4c5bf980bb7f-init/diff:/var/lib/docker/overlay2/8d3a908f316c716b7f312caec5c692ce2d5f9856d66198ac264ba6fcb248a810/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1e4814a65f640a94ab6dfc55aaf29d6565b5180a0ae16bef139c4c5bf980bb7f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1e4814a65f640a94ab6dfc55aaf29d6565b5180a0ae16bef139c4c5bf980bb7f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1e4814a65f640a94ab6dfc55aaf29d6565b5180a0ae16bef139c4c5bf980bb7f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-1-432000",
	                "Source": "/var/lib/docker/volumes/mount-start-1-432000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-1-432000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-1-432000",
	                "name.minikube.sigs.k8s.io": "mount-start-1-432000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "62c1d1fd1db4d0a1fce6373681ef6edf37d1975a7c53bbd227dfa51b976ffed1",
	            "SandboxKey": "/var/run/docker/netns/62c1d1fd1db4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51642"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51643"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51644"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51645"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51646"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-1-432000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7f2d9cf2406d",
	                        "mount-start-1-432000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "8ac2420418f894b83d09bd1631b9aba0cd47f6f39d2fe0fd6f9c7aa5ea79cc9c",
	                    "EndpointID": "9372f638bb5954cb41599c811f0c68c1d4945c198e1407f8960e7ddd93a658f6",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-1-432000",
	                        "7f2d9cf2406d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
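The inspect above shows the bind mount is in place ("/host_mnt/Users" -> "/minikube-host", RW, rprivate) and the container Running, yet `ls /minikube-host` hung for 14m51s until the harness killed it, which points at a stalled Docker Desktop file share rather than a missing mount. A sketch of the same probe with an explicit deadline instead of an open-ended wait (binary and profile names copied from the log):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        out, err := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
            "-p", "mount-start-1-432000", "ssh", "--", "ls", "/minikube-host").CombinedOutput()
        fmt.Printf("err=%v\n%s", err, out)
    }
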
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-432000 -n mount-start-1-432000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-432000 -n mount-start-1-432000: exit status 6 (395.173283ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0319 12:47:46.061199    9058 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-1-432000" does not appear in /Users/jenkins/minikube-integration/18453-925/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-1-432000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountFirst (892.04s)

TestMultiNode/serial/FreshStart2Nodes (750.91s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-472000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0319 12:49:59.594935    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:50:25.856868    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:53:28.904097    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:54:59.596583    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:55:25.860118    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:59:59.598400    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:00:25.862729    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-472000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m30.72929897s)

-- stdout --
	* [multinode-472000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-472000" primary control-plane node in "multinode-472000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-472000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
** stderr ** 
	I0319 12:48:55.313488    9171 out.go:291] Setting OutFile to fd 1 ...
	I0319 12:48:55.314360    9171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:48:55.314369    9171 out.go:304] Setting ErrFile to fd 2...
	I0319 12:48:55.314375    9171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:48:55.315099    9171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 12:48:55.317641    9171 out.go:298] Setting JSON to false
	I0319 12:48:55.344988    9171 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2906,"bootTime":1710874829,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 12:48:55.345176    9171 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 12:48:55.368064    9171 out.go:177] * [multinode-472000] minikube v1.32.0 on Darwin 14.3.1
	I0319 12:48:55.432595    9171 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 12:48:55.410747    9171 notify.go:220] Checking for updates...
	I0319 12:48:55.474781    9171 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 12:48:55.496774    9171 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 12:48:55.517590    9171 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 12:48:55.538909    9171 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	I0319 12:48:55.561923    9171 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 12:48:55.583767    9171 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 12:48:55.639186    9171 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 12:48:55.639367    9171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 12:48:55.741354    9171 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:81 SystemTime:2024-03-19 19:48:55.729701333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
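
The `docker system info --format "{{json .}}"` probe above is how the run snapshots daemon state before picking a driver. A minimal Go sketch of the same probe, assuming only that `docker` is on PATH; `dockerInfo` below is a hypothetical subset of the JSON fields visible in the log, not minikube's internal type:

    // Query the Docker daemon and decode a few fields from its JSON info.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // dockerInfo is an illustrative subset; field names match the log above.
    type dockerInfo struct {
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            log.Fatalf("docker system info: %v", err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            log.Fatalf("decode: %v", err)
        }
        fmt.Printf("server %s on %s, %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }
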
	I0319 12:48:55.765824    9171 out.go:177] * Using the docker driver based on user configuration
	I0319 12:48:55.807614    9171 start.go:297] selected driver: docker
	I0319 12:48:55.807629    9171 start.go:901] validating driver "docker" against <nil>
	I0319 12:48:55.807637    9171 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 12:48:55.810608    9171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 12:48:55.910951    9171 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:81 SystemTime:2024-03-19 19:48:55.90023759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 12:48:55.911137    9171 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 12:48:55.911323    9171 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 12:48:55.933321    9171 out.go:177] * Using Docker Desktop driver with root privileges
	I0319 12:48:55.955033    9171 cni.go:84] Creating CNI manager for ""
	I0319 12:48:55.955065    9171 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0319 12:48:55.955077    9171 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0319 12:48:55.955185    9171 start.go:340] cluster config:
	{Name:multinode-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 12:48:55.977128    9171 out.go:177] * Starting "multinode-472000" primary control-plane node in "multinode-472000" cluster
	I0319 12:48:56.021059    9171 cache.go:121] Beginning downloading kic base image for docker with docker
	I0319 12:48:56.041970    9171 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0319 12:48:56.085007    9171 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 12:48:56.085052    9171 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0319 12:48:56.085083    9171 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0319 12:48:56.085103    9171 cache.go:56] Caching tarball of preloaded images
	I0319 12:48:56.085324    9171 preload.go:173] Found /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0319 12:48:56.085347    9171 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0319 12:48:56.087112    9171 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/multinode-472000/config.json ...
	I0319 12:48:56.087202    9171 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/multinode-472000/config.json: {Name:mka40a6847b27a4d724417d2288a26afd0a156e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 12:48:56.136307    9171 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0319 12:48:56.136339    9171 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0319 12:48:56.136371    9171 cache.go:194] Successfully downloaded all kic artifacts
	I0319 12:48:56.136422    9171 start.go:360] acquireMachinesLock for multinode-472000: {Name:mk0f09b10168214c476d3d2276b0688fe6ad0b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 12:48:56.136587    9171 start.go:364] duration metric: took 154.105µs to acquireMachinesLock for "multinode-472000"
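
The machines lock above is acquired with a 500ms retry delay and a 10m timeout (the `Delay`/`Timeout` fields in the lock spec). A generic sketch of that delay/timeout acquire loop, using a plain O_EXCL lock file; this is only an illustration of the shape, not minikube's actual lock package:

    // Acquire an exclusive lock file, retrying every `delay` until `timeout`.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil // holder removes the file to release
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err // real I/O error, not contention
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/demo.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held")
    }
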
	I0319 12:48:56.136611    9171 start.go:93] Provisioning new machine with config: &{Name:multinode-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0319 12:48:56.136682    9171 start.go:125] createHost starting for "" (driver="docker")
	I0319 12:48:56.179071    9171 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0319 12:48:56.179423    9171 start.go:159] libmachine.API.Create for "multinode-472000" (driver="docker")
	I0319 12:48:56.179478    9171 client.go:168] LocalClient.Create starting
	I0319 12:48:56.179653    9171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 12:48:56.179749    9171 main.go:141] libmachine: Decoding PEM data...
	I0319 12:48:56.179783    9171 main.go:141] libmachine: Parsing certificate...
	I0319 12:48:56.179894    9171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 12:48:56.179965    9171 main.go:141] libmachine: Decoding PEM data...
	I0319 12:48:56.179981    9171 main.go:141] libmachine: Parsing certificate...
	I0319 12:48:56.180946    9171 cli_runner.go:164] Run: docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 12:48:56.231067    9171 cli_runner.go:211] docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 12:48:56.231167    9171 network_create.go:281] running [docker network inspect multinode-472000] to gather additional debugging logs...
	I0319 12:48:56.231184    9171 cli_runner.go:164] Run: docker network inspect multinode-472000
	W0319 12:48:56.281320    9171 cli_runner.go:211] docker network inspect multinode-472000 returned with exit code 1
	I0319 12:48:56.281353    9171 network_create.go:284] error running [docker network inspect multinode-472000]: docker network inspect multinode-472000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-472000 not found
	I0319 12:48:56.281363    9171 network_create.go:286] output of [docker network inspect multinode-472000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-472000 not found
	
	** /stderr **
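
The exit status 1 from `docker network inspect` is the expected signal here: the network does not exist yet, so creation proceeds. A sketch of the same existence probe, treating a non-zero exit as "missing" (network name taken from the log):

    // Probe whether a Docker network exists by the inspect command's exit code.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func networkExists(name string) bool {
        // `docker network inspect` exits non-zero when the network is absent.
        return exec.Command("docker", "network", "inspect", name).Run() == nil
    }

    func main() {
        fmt.Println(networkExists("multinode-472000"))
    }
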
	I0319 12:48:56.281484    9171 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 12:48:56.332299    9171 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 12:48:56.333953    9171 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 12:48:56.334336    9171 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024046a0}
	I0319 12:48:56.334353    9171 network_create.go:124] attempt to create docker network multinode-472000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0319 12:48:56.334428    9171 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000
	W0319 12:48:56.384268    9171 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000 returned with exit code 1
	W0319 12:48:56.384309    9171 network_create.go:149] failed to create docker network multinode-472000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0319 12:48:56.384330    9171 network_create.go:116] failed to create docker network multinode-472000 192.168.67.0/24, will retry: subnet is taken
	I0319 12:48:56.385768    9171 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 12:48:56.386131    9171 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022d3a60}
	I0319 12:48:56.386143    9171 network_create.go:124] attempt to create docker network multinode-472000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0319 12:48:56.386216    9171 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000
	I0319 12:48:56.470883    9171 network_create.go:108] docker network multinode-472000 192.168.76.0/24 created
	I0319 12:48:56.470919    9171 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-472000" container
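
The subnet scan above walks private /24 candidates in a fixed order (192.168.49.0, 58, 67, 76, ... — the third octet appears to step by 9), skips ranges that are reserved or that Docker rejects as overlapping, and derives the container's static IP as the .2 host of the chosen range (gateway is .1). A sketch of that selection order; the step size is inferred from this log, and `taken` stands in for a real reservation/overlap check:

    // Walk candidate /24 subnets and pick the first free one.
    package main

    import "fmt"

    func main() {
        taken := map[int]bool{49: true, 58: true, 67: true} // reserved/overlapping, per the log above
        for octet := 49; octet <= 255; octet += 9 {         // step inferred from 49, 58, 67, 76, 85
            if taken[octet] {
                fmt.Printf("skipping 192.168.%d.0/24: reserved\n", octet)
                continue
            }
            fmt.Printf("using 192.168.%d.0/24, gateway 192.168.%d.1, static IP 192.168.%d.2\n",
                octet, octet, octet)
            break
        }
    }
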
	I0319 12:48:56.471036    9171 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 12:48:56.520647    9171 cli_runner.go:164] Run: docker volume create multinode-472000 --label name.minikube.sigs.k8s.io=multinode-472000 --label created_by.minikube.sigs.k8s.io=true
	I0319 12:48:56.571188    9171 oci.go:103] Successfully created a docker volume multinode-472000
	I0319 12:48:56.571296    9171 cli_runner.go:164] Run: docker run --rm --name multinode-472000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-472000 --entrypoint /usr/bin/test -v multinode-472000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 12:48:56.937327    9171 oci.go:107] Successfully prepared a docker volume multinode-472000
	I0319 12:48:56.937363    9171 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 12:48:56.937375    9171 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 12:48:56.937477    9171 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-472000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
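
The extraction step runs `tar` inside a throwaway container, with the preload tarball bind-mounted read-only and the named volume mounted as the destination. A sketch of the same pattern with illustrative image and paths (any image that ships tar plus an lz4 binary works; the kicbase image above does):

    // Extract a local .tar.lz4 into a named Docker volume via a one-shot container.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/tmp/preloaded-images.tar.lz4:/preloaded.tar:ro", // illustrative host path
            "-v", "demo-volume:/extractDir",                        // named volume as destination
            "some/image-with-lz4",                                  // placeholder image
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
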
	I0319 12:54:56.183420    9171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 12:54:56.183568    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 12:54:56.235209    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 12:54:56.235344    9171 retry.go:31] will retry after 199.124901ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
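
The port lookup above reads the host port published for 22/tcp via a Go template; while the container is absent it exits non-zero, which is what drives the retries that follow. A sketch of the same lookup:

    // Read the host port mapped to a container's 22/tcp via docker inspect.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func sshPort(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            name).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        port, err := sshPort("multinode-472000")
        if err != nil {
            fmt.Println("no such container (yet):", err)
            return
        }
        fmt.Println("ssh on host port", port)
    }
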
	I0319 12:54:56.436813    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 12:54:56.490248    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 12:54:56.490349    9171 retry.go:31] will retry after 291.111169ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:54:56.781967    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 12:54:56.833769    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 12:54:56.833866    9171 retry.go:31] will retry after 747.727342ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:54:57.582521    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 12:54:57.633460    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 12:54:57.633588    9171 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 12:54:57.633608    9171 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
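
The waits above (roughly doubling, with some jitter) are a classic backoff loop around a flaky operation. A generic sketch of that shape, not the retry.go implementation:

    // Retry an operation with jittered, roughly-doubling waits.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Double the base each attempt and add up to one base of jitter.
            wait := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
        return err
    }

    func main() {
        err := retry(4, 200*time.Millisecond, func() error {
            return errors.New("No such container: multinode-472000") // stand-in failure
        })
        fmt.Println("giving up:", err)
    }
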
	I0319 12:54:57.633663    9171 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 12:54:57.633729    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 12:54:57.683232    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 12:54:57.683327    9171 retry.go:31] will retry after 164.231774ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:54:57.848015    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 12:54:57.901180    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 12:54:57.901277    9171 retry.go:31] will retry after 417.472199ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:54:58.321172    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 12:54:58.373514    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 12:54:58.373620    9171 retry.go:31] will retry after 739.511802ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:54:59.113822    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 12:54:59.165750    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 12:54:59.165857    9171 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 12:54:59.165874    9171 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:54:59.165889    9171 start.go:128] duration metric: took 6m3.026342003s to createHost
	I0319 12:54:59.165895    9171 start.go:83] releasing machines lock for "multinode-472000", held for 6m3.026449694s
	W0319 12:54:59.165909    9171 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0319 12:54:59.166319    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:54:59.221038    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 12:54:59.221089    9171 delete.go:82] Unable to get host status for multinode-472000, assuming it has already been deleted: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	W0319 12:54:59.221159    9171 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0319 12:54:59.221170    9171 start.go:728] Will try again in 5 seconds ...
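
The failure above is a 360-second deadline on host creation, followed by one more attempt after a 5-second pause. A sketch of that control flow, with `createHost` standing in for the real provisioning work:

    // Bound host creation by a deadline; on timeout, pause and retry once.
    package main

    import (
        "context"
        "fmt"
        "time"
    )

    func createHost(ctx context.Context) error {
        select {
        case <-time.After(10 * time.Minute): // stands in for slow provisioning
            return nil
        case <-ctx.Done():
            return fmt.Errorf("create host timed out: %w", ctx.Err())
        }
    }

    func main() {
        for attempt := 1; attempt <= 2; attempt++ {
            ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
            err := createHost(ctx)
            cancel()
            if err == nil {
                fmt.Println("host created")
                return
            }
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
        }
    }
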
	I0319 12:55:04.221417    9171 start.go:360] acquireMachinesLock for multinode-472000: {Name:mk0f09b10168214c476d3d2276b0688fe6ad0b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 12:55:04.222264    9171 start.go:364] duration metric: took 774.339µs to acquireMachinesLock for "multinode-472000"
	I0319 12:55:04.222489    9171 start.go:96] Skipping create...Using existing machine configuration
	I0319 12:55:04.222514    9171 fix.go:54] fixHost starting: 
	I0319 12:55:04.223188    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:04.275516    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 12:55:04.275559    9171 fix.go:112] recreateIfNeeded on multinode-472000: state= err=unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:04.275578    9171 fix.go:117] machineExists: false. err=machine does not exist
	I0319 12:55:04.297457    9171 out.go:177] * docker "multinode-472000" container is missing, will recreate.
	I0319 12:55:04.319061    9171 delete.go:124] DEMOLISHING multinode-472000 ...
	I0319 12:55:04.319244    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:04.369841    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	W0319 12:55:04.369903    9171 stop.go:83] unable to get state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:04.369932    9171 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:04.370303    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:04.419622    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 12:55:04.419670    9171 delete.go:82] Unable to get host status for multinode-472000, assuming it has already been deleted: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:04.419750    9171 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-472000
	W0319 12:55:04.468392    9171 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-472000 returned with exit code 1
	I0319 12:55:04.468447    9171 kic.go:371] could not find the container multinode-472000 to remove it. will try anyways
	I0319 12:55:04.468525    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:04.518257    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	W0319 12:55:04.518301    9171 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:04.518381    9171 cli_runner.go:164] Run: docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0"
	W0319 12:55:04.567664    9171 cli_runner.go:211] docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0319 12:55:04.567705    9171 oci.go:650] error shutdown multinode-472000: docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-472000
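
What follows in the log is a shutdown-then-verify loop: ask the container to power off, then poll its `.State.Status` with growing waits until it reads `exited` or retries run out. A sketch of that pattern, tolerant of the container already being gone:

    // Request a graceful power-off, then poll container state until it exits.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func status(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        name := "multinode-472000"
        // Best-effort graceful power-off; failures are tolerated.
        _ = exec.Command("docker", "exec", "--privileged", "-t", name,
            "/bin/bash", "-c", "sudo init 0").Run()
        for wait := 500 * time.Millisecond; wait < 5*time.Second; wait *= 2 {
            s, err := status(name)
            if err == nil && s == "exited" {
                fmt.Println("container exited")
                return
            }
            fmt.Printf("status %q, will retry after %v\n", s, wait)
            time.Sleep(wait)
        }
        fmt.Println("couldn't verify container is exited (might be okay)")
    }
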
	I0319 12:55:05.568411    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:05.621251    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 12:55:05.621296    9171 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:05.621307    9171 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 12:55:05.621332    9171 retry.go:31] will retry after 555.149615ms: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:06.176934    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:06.227011    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 12:55:06.227064    9171 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:06.227077    9171 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 12:55:06.227100    9171 retry.go:31] will retry after 539.596971ms: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:06.767356    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:06.819693    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 12:55:06.819741    9171 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:06.819751    9171 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 12:55:06.819777    9171 retry.go:31] will retry after 909.782986ms: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:07.731064    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:07.784024    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 12:55:07.784071    9171 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:07.784090    9171 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 12:55:07.784113    9171 retry.go:31] will retry after 1.902649168s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:09.687512    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:09.739050    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 12:55:09.739098    9171 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:09.739109    9171 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 12:55:09.739137    9171 retry.go:31] will retry after 2.843718582s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:12.583186    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:12.633954    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 12:55:12.634003    9171 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:12.634018    9171 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 12:55:12.634040    9171 retry.go:31] will retry after 1.922837104s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:14.559288    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:14.611718    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 12:55:14.611763    9171 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:14.611771    9171 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 12:55:14.611794    9171 retry.go:31] will retry after 3.796722515s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:18.409224    9171 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 12:55:18.462519    9171 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 12:55:18.462561    9171 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 12:55:18.462573    9171 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 12:55:18.462604    9171 oci.go:88] couldn't shut down multinode-472000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	 
	I0319 12:55:18.462675    9171 cli_runner.go:164] Run: docker rm -f -v multinode-472000
	I0319 12:55:18.513599    9171 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-472000
	W0319 12:55:18.563058    9171 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-472000 returned with exit code 1
	I0319 12:55:18.563179    9171 cli_runner.go:164] Run: docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 12:55:18.611673    9171 cli_runner.go:164] Run: docker network rm multinode-472000
	I0319 12:55:18.738208    9171 fix.go:124] Sleeping 1 second for extra luck!
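
Before the second attempt, the run tears everything down best-effort: force-remove the container together with its volumes, then remove the network, tolerating "No such ..." errors for resources that never materialized. A sketch of that teardown, with the name taken from the log:

    // Best-effort teardown: remove container (with volumes), then the network.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        name := "multinode-472000"
        for _, args := range [][]string{
            {"rm", "-f", "-v", name},   // force-remove container and its volumes
            {"network", "rm", name},    // then the dedicated network
        } {
            if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
                fmt.Printf("docker %v (ignored): %v\n%s", args, err, out)
            }
        }
    }
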
	I0319 12:55:19.739737    9171 start.go:125] createHost starting for "" (driver="docker")
	I0319 12:55:19.763042    9171 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0319 12:55:19.763230    9171 start.go:159] libmachine.API.Create for "multinode-472000" (driver="docker")
	I0319 12:55:19.763262    9171 client.go:168] LocalClient.Create starting
	I0319 12:55:19.763468    9171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 12:55:19.763579    9171 main.go:141] libmachine: Decoding PEM data...
	I0319 12:55:19.763606    9171 main.go:141] libmachine: Parsing certificate...
	I0319 12:55:19.763738    9171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 12:55:19.763832    9171 main.go:141] libmachine: Decoding PEM data...
	I0319 12:55:19.763850    9171 main.go:141] libmachine: Parsing certificate...
	I0319 12:55:19.764845    9171 cli_runner.go:164] Run: docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 12:55:19.817010    9171 cli_runner.go:211] docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 12:55:19.817100    9171 network_create.go:281] running [docker network inspect multinode-472000] to gather additional debugging logs...
	I0319 12:55:19.817126    9171 cli_runner.go:164] Run: docker network inspect multinode-472000
	W0319 12:55:19.867189    9171 cli_runner.go:211] docker network inspect multinode-472000 returned with exit code 1
	I0319 12:55:19.867214    9171 network_create.go:284] error running [docker network inspect multinode-472000]: docker network inspect multinode-472000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-472000 not found
	I0319 12:55:19.867233    9171 network_create.go:286] output of [docker network inspect multinode-472000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-472000 not found
	
	** /stderr **
	I0319 12:55:19.867364    9171 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 12:55:19.919088    9171 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 12:55:19.920659    9171 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 12:55:19.922202    9171 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 12:55:19.923734    9171 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 12:55:19.924149    9171 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013450}
	I0319 12:55:19.924161    9171 network_create.go:124] attempt to create docker network multinode-472000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0319 12:55:19.924235    9171 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000
	I0319 12:55:20.009871    9171 network_create.go:108] docker network multinode-472000 192.168.85.0/24 created
	I0319 12:55:20.009902    9171 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-472000" container
	I0319 12:55:20.010006    9171 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 12:55:20.059370    9171 cli_runner.go:164] Run: docker volume create multinode-472000 --label name.minikube.sigs.k8s.io=multinode-472000 --label created_by.minikube.sigs.k8s.io=true
	I0319 12:55:20.110205    9171 oci.go:103] Successfully created a docker volume multinode-472000
	I0319 12:55:20.110334    9171 cli_runner.go:164] Run: docker run --rm --name multinode-472000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-472000 --entrypoint /usr/bin/test -v multinode-472000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 12:55:20.431838    9171 oci.go:107] Successfully prepared a docker volume multinode-472000
	I0319 12:55:20.431867    9171 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 12:55:20.431880    9171 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 12:55:20.431982    9171 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-472000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0319 13:01:19.766631    9171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:01:19.766767    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:19.819559    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:19.819672    9171 retry.go:31] will retry after 176.670144ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:19.996684    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:20.048462    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:20.048555    9171 retry.go:31] will retry after 550.051719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:20.601072    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:20.652848    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:20.652962    9171 retry.go:31] will retry after 677.562395ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:21.331522    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:21.384624    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:01:21.384731    9171 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:01:21.384750    9171 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:21.384809    9171 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:01:21.384868    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:21.434905    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:21.435010    9171 retry.go:31] will retry after 165.285004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:21.602673    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:21.657402    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:21.657496    9171 retry.go:31] will retry after 532.989611ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:22.192261    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:22.244315    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:22.244419    9171 retry.go:31] will retry after 808.312175ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:23.055119    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:23.107380    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:01:23.107493    9171 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:01:23.107509    9171 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:23.107524    9171 start.go:128] duration metric: took 6m3.364898793s to createHost
	I0319 13:01:23.107594    9171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:01:23.107646    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:23.156760    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:23.156853    9171 retry.go:31] will retry after 147.984049ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:23.307224    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:23.360670    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:23.360776    9171 retry.go:31] will retry after 483.762808ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:23.845813    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:23.898884    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:23.898981    9171 retry.go:31] will retry after 396.244286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:24.296113    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:24.349061    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:01:24.349162    9171 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:01:24.349186    9171 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:24.349246    9171 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:01:24.349304    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:24.400243    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:24.400349    9171 retry.go:31] will retry after 222.230565ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:24.623715    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:24.677065    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:24.677157    9171 retry.go:31] will retry after 224.994337ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:24.904561    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:24.955893    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:01:24.955988    9171 retry.go:31] will retry after 819.558134ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:25.776322    9171 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:01:25.828805    9171 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:01:25.828904    9171 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:01:25.828920    9171 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:01:25.828929    9171 fix.go:56] duration metric: took 6m21.6034091s for fixHost
	I0319 13:01:25.828935    9171 start.go:83] releasing machines lock for "multinode-472000", held for 6m21.603494139s
	W0319 13:01:25.829021    9171 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-472000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-472000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0319 13:01:25.872599    9171 out.go:177] 
	W0319 13:01:25.894361    9171 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0319 13:01:25.894417    9171 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0319 13:01:25.894455    9171 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0319 13:01:25.915386    9171 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-472000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
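** editor note **
Every probe in the loop above is the same operation: minikube resolves the host port that Docker mapped to the guest's 22/tcp using a docker inspect Go template, and since the "multinode-472000" container was never created, each probe exits 1 with "No such container" until the 360-second createHost budget runs out. The df -h /var and df -BG /var free-space checks fail for the same reason: they need an SSH session to a host that does not exist. A minimal, hypothetical sketch of that port probe (only the docker invocation is verbatim from the log; minikube's real implementation lives in its cli_runner/kic driver):

	// portprobe.go - sketch of the SSH-port lookup seen throughout the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshPort asks the Docker daemon for the host port mapped to the
	// container's 22/tcp, using the same Go template as the log lines above.
	func sshPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			// A missing container makes docker exit 1 and print
			// "Error response from daemon: No such container: ..." on
			// stderr, which is exactly the loop visible above.
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		return string(out), nil
	}

	func main() {
		port, err := sshPort("multinode-472000")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh port:", port)
	}
** /editor note **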
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "320bdb5d70e70c7ae81aeec68fe9db7403b5867010e14ced1d0a0d1c9a0809d4",
	        "Created": "2024-03-19T19:55:19.97078513Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (115.286868ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0319 13:01:26.159611    9598 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (750.91s)
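** editor note **
The retry.go lines in this test show the characteristic cadence of a jittered, roughly growing backoff (176ms, 550ms, 677ms, 808ms, ...) bounded by an overall budget, which is why the duration metrics read "took 6m3.36...s to createHost" and "took 6m21.60...s for fixHost" before the 360-second DRV_CREATE_TIMEOUT fired. A generic sketch of that pattern, assuming nothing about minikube's actual retry package beyond what the log shows:

	// backoff.go - illustrative retry-with-backoff loop; the delays and the
	// deadline mirror the log, the helper itself is hypothetical.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or the overall
	// deadline (cf. the 360 s createHost timeout above) expires.
	func retryWithBackoff(deadline time.Duration, fn func() error) error {
		start := time.Now()
		delay := 100 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("timed out after %s: %w", deadline, err)
			}
			jitter := time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			if delay < 2*time.Second {
				delay *= 2 // grow toward a cap, as the log's delays do
			}
		}
	}

	func main() {
		_ = retryWithBackoff(time.Second, func() error {
			return errors.New("No such container: multinode-472000")
		})
	}
** /editor note **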

TestMultiNode/serial/DeployApp2Nodes (102.44s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (101.406494ms)

** stderr ** 
	error: cluster "multinode-472000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- rollout status deployment/busybox: exit status 1 (102.078949ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.323465ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.975897ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.175415ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.477351ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.942423ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.138666ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.000611ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.90841ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.500457ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (146.980498ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.473669ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (101.208742ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- exec  -- nslookup kubernetes.io: exit status 1 (100.867556ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- exec  -- nslookup kubernetes.default: exit status 1 (100.49397ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (100.66226ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "320bdb5d70e70c7ae81aeec68fe9db7403b5867010e14ced1d0a0d1c9a0809d4",
	        "Created": "2024-03-19T19:55:19.97078513Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (114.639961ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0319 13:03:08.599158    9688 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (102.44s)
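** editor note **
All of the kubectl failures in this subtest share a single root cause: the failed FreshStart2Nodes run left no "multinode-472000" entry in the kubeconfig, so "cluster ... does not exist" and "no server found" are downstream noise, not independent breakage. A hypothetical pre-flight guard for steps like these (hasContext is an illustration, not a helper from the suite; it assumes kubectl on PATH):

	// contextcheck.go - confirm the kubeconfig has a context for the profile
	// before issuing workload commands.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hasContext(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Fields(string(out)) {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasContext("multinode-472000")
		// On this runner it would print "false <nil>": the start step never
		// created the context.
		fmt.Println(ok, err)
	}
** /editor note **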

TestMultiNode/serial/PingHostFrom2Pods (0.27s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-472000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (100.873699ms)

** stderr ** 
	error: no server found for cluster "multinode-472000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "320bdb5d70e70c7ae81aeec68fe9db7403b5867010e14ced1d0a0d1c9a0809d4",
	        "Created": "2024-03-19T19:55:19.97078513Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (113.600819ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0319 13:03:08.866925    9697 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.27s)

TestMultiNode/serial/AddNode (0.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-472000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-472000 -v 3 --alsologtostderr: exit status 80 (203.336026ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0319 13:03:08.929940    9701 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:08.930793    9701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:08.930799    9701 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:08.930803    9701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:08.930976    9701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:08.931310    9701 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:08.931580    9701 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:08.931953    9701 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:03:08.981245    9701 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:03:09.003993    9701 out.go:177] 
	W0319 13:03:09.025687    9701 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-472000 host status: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-472000 host status: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	I0319 13:03:09.047313    9701 out.go:177] 

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-472000 -v 3 --alsologtostderr" : exit status 80
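** editor note **
node add aborts with GUEST_STATUS because its first step, the docker container inspect --format={{.State.Status}} probe at 13:03:08.931, cannot find the control-plane container. A sketch of that state probe (the "Nonexistent" fallback mirrors the status output elsewhere in this report; the real mapping lives in minikube's kic driver):

	// statestatus.go - hypothetical reconstruction of the container-state
	// probe; only the docker invocation is verbatim from the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			// "No such container" lands here, which minikube reports as an
			// unknown state and which status renders as "Nonexistent".
			return "Nonexistent", fmt.Errorf("unknown state %q: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		fmt.Println(containerState("multinode-472000"))
	}
** /editor note **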
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "320bdb5d70e70c7ae81aeec68fe9db7403b5867010e14ced1d0a0d1c9a0809d4",
	        "Created": "2024-03-19T19:55:19.97078513Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (113.360295ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0319 13:03:09.238362    9707 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)

TestMultiNode/serial/MultiNodeLabels (0.21s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-472000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-472000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (37.6982ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-472000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-472000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-472000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
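** editor note **
The "unexpected end of JSON input" at multinode_test.go:230 is the stock encoding/json error for zero-length input: kubectl exited 1 without writing anything to stdout, and the test decoded that empty output anyway. It reproduces in isolation:

	// emptydecode.go - decoding an empty byte slice always fails this way.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}
** /editor note **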
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "320bdb5d70e70c7ae81aeec68fe9db7403b5867010e14ced1d0a0d1c9a0809d4",
	        "Created": "2024-03-19T19:55:19.97078513Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (114.60502ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0319 13:03:09.443688    9714 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.21s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-472000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-1-432000\",\"Status\":\"\",\"Config\":null,\"Active\":false}],\"valid\":[{\"Name\":\"multinode-472000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-472000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-472000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
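** editor note **
The assertion above decodes the profile-list JSON and counts Config.Nodes for the profile: the stored config still holds only the single default node because the start never completed, while the test, having nominally added a third node in AddNode, expects 3. A trimmed sketch of that check (the structs are illustrations, not minikube's full ClusterConfig):

	// profilecount.go - count nodes per valid profile in `minikube profile
	// list --output json`, on an abbreviated form of the blob in the log.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					ControlPlane bool
					Worker       bool
				}
			}
		} `json:"valid"`
	}

	func main() {
		blob := []byte(`{"valid":[{"Name":"multinode-472000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(blob, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // multinode-472000: 1 node(s)
		}
	}
** /editor note **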
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "320bdb5d70e70c7ae81aeec68fe9db7403b5867010e14ced1d0a0d1c9a0809d4",
	        "Created": "2024-03-19T19:55:19.97078513Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (113.297917ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0319 13:03:09.797330    9726 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (0.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status --output json --alsologtostderr: exit status 7 (112.747259ms)

-- stdout --
	{"Name":"multinode-472000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	I0319 13:03:09.859872    9730 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:09.860050    9730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:09.860055    9730 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:09.860059    9730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:09.860230    9730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:09.860422    9730 out.go:298] Setting JSON to true
	I0319 13:03:09.860444    9730 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:09.860480    9730 notify.go:220] Checking for updates...
	I0319 13:03:09.860748    9730 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:09.860764    9730 status.go:255] checking status of multinode-472000 ...
	I0319 13:03:09.861156    9730 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:03:09.910091    9730 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:03:09.910152    9730 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:03:09.910187    9730 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:03:09.910216    9730 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:03:09.910224    9730 status.go:263] The "multinode-472000" host does not exist!

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-472000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
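** editor note **
The decode error at multinode_test.go:191 is mechanical: with one node, status --output json printed a single JSON object (see the stdout above), while the test unmarshals into []cmd.Status, and encoding/json refuses to put an object into a slice. Reproduced with a pared-down stand-in for cmd.Status:

	// statusdecode.go - object-vs-slice mismatch behind the failure above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		// The single-object payload from the log.
		payload := []byte(`{"Name":"multinode-472000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`)

		var many []Status
		fmt.Println(json.Unmarshal(payload, &many)) // json: cannot unmarshal object into Go value of type []main.Status

		var one Status
		fmt.Println(json.Unmarshal(payload, &one), one.Host) // <nil> Nonexistent
	}
** /editor note **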
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "320bdb5d70e70c7ae81aeec68fe9db7403b5867010e14ced1d0a0d1c9a0809d4",
	        "Created": "2024-03-19T19:55:19.97078513Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (114.679582ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0319 13:03:10.078488    9736 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.28s)

TestMultiNode/serial/StopNode (0.55s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 node stop m03: exit status 85 (155.673497ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
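GUEST_NODE_RETRIEVE means the profile's saved node list no longer contains an entry named m03 (minikube names secondary nodes m02, m03, and so on). A hedged sketch of the lookup that fails here; the Node struct and findNode helper are illustrative, not minikube's exact API:

package main

import (
	"errors"
	"fmt"
)

// Node is a stand-in for a node entry in a minikube profile config.
type Node struct {
	Name string // "" for the primary node, "m02"/"m03" for workers
}

var errNodeNotFound = errors.New("retrieving node: Could not find node")

// findNode mimics looking a node up by name in the profile's node list.
func findNode(nodes []Node, name string) (Node, error) {
	for _, n := range nodes {
		if n.Name == name {
			return n, nil
		}
	}
	return Node{}, fmt.Errorf("%w %s", errNodeNotFound, name)
}

func main() {
	// In this run the cluster never came up, so only the primary
	// node exists and "m03" is absent from the list.
	nodes := []Node{{Name: ""}}
	if _, err := findNode(nodes, "m03"); err != nil {
		fmt.Println(err) // retrieving node: Could not find node m03
	}
}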
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-472000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status: exit status 7 (114.506092ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 13:03:10.349399    9742 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:03:10.349411    9742 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr: exit status 7 (113.666738ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:03:10.411936    9746 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:10.412108    9746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:10.412114    9746 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:10.412118    9746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:10.412284    9746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:10.412463    9746 out.go:298] Setting JSON to false
	I0319 13:03:10.412487    9746 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:10.412520    9746 notify.go:220] Checking for updates...
	I0319 13:03:10.412748    9746 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:10.412764    9746 status.go:255] checking status of multinode-472000 ...
	I0319 13:03:10.413135    9746 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:03:10.463029    9746 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:03:10.463108    9746 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:03:10.463127    9746 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:03:10.463152    9746 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:03:10.463159    9746 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
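Every status probe in this log follows the same path: shell out to docker container inspect --format={{.State.Status}}, and when the daemon answers "No such container" (exit status 1), report the host as Nonexistent and exit with status 7. A rough sketch of that probe with os/exec; the mapping is simplified relative to minikube's status.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState runs the same probe seen in this log and maps a
// missing container to "Nonexistent" instead of failing outright.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		// "No such container" arrives on stderr with exit status 1.
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent", nil
		}
		return "", fmt.Errorf("unknown state %q: %v", name, err)
	}
	return strings.TrimSpace(string(out)), nil // e.g. "running", "exited"
}

func main() {
	state, err := containerState("multinode-472000")
	if err != nil {
		fmt.Println("status error:", err)
		return
	}
	fmt.Println(state) // "Nonexistent" when the container is gone
}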
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr": multinode-472000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr": multinode-472000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr": multinode-472000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "320bdb5d70e70c7ae81aeec68fe9db7403b5867010e14ced1d0a0d1c9a0809d4",
	        "Created": "2024-03-19T19:55:19.97078513Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (113.736557ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 13:03:10.629797    9752 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.55s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (51.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 node start m03 -v=7 --alsologtostderr: exit status 85 (156.08297ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:03:10.692706    9756 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:10.693077    9756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:10.693083    9756 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:10.693087    9756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:10.693267    9756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:10.693634    9756 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:10.693907    9756 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:10.715283    9756 out.go:177] 
	W0319 13:03:10.737121    9756 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0319 13:03:10.737146    9756 out.go:239] * 
	* 
	W0319 13:03:10.741364    9756 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0319 13:03:10.762864    9756 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0319 13:03:10.692706    9756 out.go:291] Setting OutFile to fd 1 ...
I0319 13:03:10.693077    9756 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 13:03:10.693083    9756 out.go:304] Setting ErrFile to fd 2...
I0319 13:03:10.693087    9756 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 13:03:10.693267    9756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
I0319 13:03:10.693634    9756 mustload.go:65] Loading cluster: multinode-472000
I0319 13:03:10.693907    9756 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0319 13:03:10.715283    9756 out.go:177] 
W0319 13:03:10.737121    9756 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0319 13:03:10.737146    9756 out.go:239] * 
* 
W0319 13:03:10.741364    9756 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0319 13:03:10.762864    9756 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-472000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr: exit status 7 (113.995746ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:03:10.849238    9758 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:10.849433    9758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:10.849439    9758 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:10.849442    9758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:10.849629    9758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:10.849799    9758 out.go:298] Setting JSON to false
	I0319 13:03:10.849820    9758 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:10.849859    9758 notify.go:220] Checking for updates...
	I0319 13:03:10.850084    9758 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:10.850100    9758 status.go:255] checking status of multinode-472000 ...
	I0319 13:03:10.850493    9758 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:03:10.900204    9758 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:03:10.900290    9758 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:03:10.900310    9758 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:03:10.900334    9758 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:03:10.900354    9758 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr: exit status 7 (119.784792ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:03:11.667578    9762 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:11.667867    9762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:11.667872    9762 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:11.667876    9762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:11.668039    9762 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:11.668214    9762 out.go:298] Setting JSON to false
	I0319 13:03:11.668235    9762 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:11.668272    9762 notify.go:220] Checking for updates...
	I0319 13:03:11.668510    9762 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:11.668526    9762 status.go:255] checking status of multinode-472000 ...
	I0319 13:03:11.668893    9762 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:03:11.720223    9762 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:03:11.720295    9762 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:03:11.720319    9762 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:03:11.720347    9762 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:03:11.720355    9762 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr: exit status 7 (121.028165ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:03:13.452241    9766 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:13.452998    9766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:13.453007    9766 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:13.453013    9766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:13.453682    9766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:13.453871    9766 out.go:298] Setting JSON to false
	I0319 13:03:13.453893    9766 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:13.453928    9766 notify.go:220] Checking for updates...
	I0319 13:03:13.454149    9766 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:13.454164    9766 status.go:255] checking status of multinode-472000 ...
	I0319 13:03:13.454543    9766 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:03:13.506614    9766 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:03:13.506697    9766 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:03:13.506718    9766 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:03:13.506739    9766 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:03:13.506747    9766 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr: exit status 7 (117.082397ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:03:15.316778    9773 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:15.317058    9773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:15.317064    9773 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:15.317068    9773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:15.317228    9773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:15.318298    9773 out.go:298] Setting JSON to false
	I0319 13:03:15.318326    9773 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:15.318365    9773 notify.go:220] Checking for updates...
	I0319 13:03:15.318594    9773 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:15.318615    9773 status.go:255] checking status of multinode-472000 ...
	I0319 13:03:15.319000    9773 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:03:15.369004    9773 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:03:15.369065    9773 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:03:15.369094    9773 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:03:15.369130    9773 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:03:15.369139    9773 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr: exit status 7 (116.363114ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:03:18.048901    9779 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:18.049579    9779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:18.049587    9779 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:18.049593    9779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:18.050061    9779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:18.050424    9779 out.go:298] Setting JSON to false
	I0319 13:03:18.050448    9779 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:18.050492    9779 notify.go:220] Checking for updates...
	I0319 13:03:18.050725    9779 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:18.050739    9779 status.go:255] checking status of multinode-472000 ...
	I0319 13:03:18.051111    9779 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:03:18.101407    9779 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:03:18.101470    9779 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:03:18.101505    9779 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:03:18.101539    9779 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:03:18.101548    9779 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr: exit status 7 (116.880832ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:03:25.225201    9787 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:25.225836    9787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:25.225845    9787 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:25.225851    9787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:25.226335    9787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:25.226785    9787 out.go:298] Setting JSON to false
	I0319 13:03:25.226810    9787 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:25.226860    9787 notify.go:220] Checking for updates...
	I0319 13:03:25.227096    9787 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:25.227111    9787 status.go:255] checking status of multinode-472000 ...
	I0319 13:03:25.227494    9787 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:03:25.277833    9787 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:03:25.277909    9787 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:03:25.277929    9787 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:03:25.277955    9787 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:03:25.277962    9787 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr: exit status 7 (121.737189ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:03:34.168518    9794 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:34.168733    9794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:34.168738    9794 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:34.168755    9794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:34.168949    9794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:34.169180    9794 out.go:298] Setting JSON to false
	I0319 13:03:34.169219    9794 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:34.169261    9794 notify.go:220] Checking for updates...
	I0319 13:03:34.170476    9794 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:34.170497    9794 status.go:255] checking status of multinode-472000 ...
	I0319 13:03:34.170898    9794 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:03:34.222891    9794 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:03:34.222970    9794 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:03:34.222991    9794 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:03:34.223014    9794 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:03:34.223022    9794 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr: exit status 7 (114.416409ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:03:44.447693    9800 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:03:44.447880    9800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:44.447886    9800 out.go:304] Setting ErrFile to fd 2...
	I0319 13:03:44.447890    9800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:03:44.448069    9800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:03:44.448243    9800 out.go:298] Setting JSON to false
	I0319 13:03:44.448264    9800 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:03:44.448309    9800 notify.go:220] Checking for updates...
	I0319 13:03:44.448535    9800 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:03:44.448550    9800 status.go:255] checking status of multinode-472000 ...
	I0319 13:03:44.448939    9800 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:03:44.498352    9800 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:03:44.498426    9800 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:03:44.498451    9800 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:03:44.498472    9800 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:03:44.498483    9800 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr: exit status 7 (120.434016ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:04:02.359217    9807 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:04:02.359397    9807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:04:02.359403    9807 out.go:304] Setting ErrFile to fd 2...
	I0319 13:04:02.359407    9807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:04:02.359584    9807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:04:02.359772    9807 out.go:298] Setting JSON to false
	I0319 13:04:02.359793    9807 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:04:02.359835    9807 notify.go:220] Checking for updates...
	I0319 13:04:02.360070    9807 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:04:02.360086    9807 status.go:255] checking status of multinode-472000 ...
	I0319 13:04:02.360478    9807 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:02.412076    9807 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:04:02.412165    9807 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:04:02.412186    9807 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:04:02.412210    9807 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:04:02.412222    9807 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-472000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "320bdb5d70e70c7ae81aeec68fe9db7403b5867010e14ced1d0a0d1c9a0809d4",
	        "Created": "2024-03-19T19:55:19.97078513Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (113.369057ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 13:04:02.579036    9813 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.95s)
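The timestamps on the repeated status runs above (13:03:10, :11, :13, :15, :18, :25, :34, :44, then 13:04:02) show the test polling with growing intervals until it gives up after roughly 52 seconds. A generic Go sketch of that pattern; the intervals and the pollUntil helper are illustrative, not the test's exact implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil retries check with growing intervals until it succeeds
// or the deadline passes, mirroring the cadence of the repeated
// "minikube status" runs in this log.
func pollUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	interval := time.Second
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		time.Sleep(interval)
		interval = interval * 3 / 2 // back off gradually
	}
}

func main() {
	// Short timeout for the demo; the real test allows ~50s.
	err := pollUntil(3*time.Second, func() error {
		// Stand-in for running "minikube status" and checking the host.
		return errors.New(`host "multinode-472000" is Nonexistent`)
	})
	fmt.Println(err)
}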

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (785.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-472000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-472000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-472000: exit status 82 (15.494654905s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-472000"  ...
	* Stopping node "multinode-472000"  ...
	* Stopping node "multinode-472000"  ...
	* Stopping node "multinode-472000"  ...
	* Stopping node "multinode-472000"  ...
	* Stopping node "multinode-472000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-472000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-472000" : exit status 82
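The six "* Stopping node" lines over about 15.5 seconds suggest a bounded retry loop around the stop call; when every attempt fails because the container is already gone, minikube reports GUEST_STOP_TIMEOUT and exits with code 82. A simplified sketch of that shape; the attempt count and error text come from this log, while the loop itself is illustrative:

package main

import (
	"errors"
	"fmt"
	"os"
)

var errNoContainer = errors.New(
	"docker container inspect multinode-472000 --format=<no value>: exit status 1")

// stopNode stands in for one stop attempt; here the container no
// longer exists, so every attempt fails the same way.
func stopNode(name string) error {
	fmt.Printf("* Stopping node %q  ...\n", name)
	return errNoContainer
}

func main() {
	const attempts = 6 // six "Stopping node" lines appear in the log
	var err error
	for i := 0; i < attempts; i++ {
		if err = stopNode("multinode-472000"); err == nil {
			return
		}
	}
	fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM:", err)
	os.Exit(82) // matches the exit status recorded above
}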
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-472000 --wait=true -v=8 --alsologtostderr
E0319 13:04:42.657014    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:04:59.600705    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:05:25.863952    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 13:09:59.603172    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:10:08.912215    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 13:10:25.865542    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 13:14:59.606477    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:15:25.868865    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-472000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m50.158054188s)

                                                
                                                
-- stdout --
	* [multinode-472000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-472000" primary control-plane node in "multinode-472000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* docker "multinode-472000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-472000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:04:18.199347    9837 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:04:18.200082    9837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:04:18.200091    9837 out.go:304] Setting ErrFile to fd 2...
	I0319 13:04:18.200097    9837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:04:18.200556    9837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:04:18.202267    9837 out.go:298] Setting JSON to false
	I0319 13:04:18.224723    9837 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3829,"bootTime":1710874829,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 13:04:18.224824    9837 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 13:04:18.246122    9837 out.go:177] * [multinode-472000] minikube v1.32.0 on Darwin 14.3.1
	I0319 13:04:18.309822    9837 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 13:04:18.288901    9837 notify.go:220] Checking for updates...
	I0319 13:04:18.352868    9837 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 13:04:18.373905    9837 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 13:04:18.394961    9837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 13:04:18.415920    9837 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	I0319 13:04:18.436869    9837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 13:04:18.458592    9837 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:04:18.458754    9837 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 13:04:18.514972    9837 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 13:04:18.515145    9837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:04:18.614149    9837 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:101 SystemTime:2024-03-19 20:04:18.603756329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 13:04:18.656165    9837 out.go:177] * Using the docker driver based on existing profile
	I0319 13:04:18.677546    9837 start.go:297] selected driver: docker
	I0319 13:04:18.677571    9837 start.go:901] validating driver "docker" against &{Name:multinode-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 13:04:18.677693    9837 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 13:04:18.677902    9837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:04:18.777609    9837 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:101 SystemTime:2024-03-19 20:04:18.767408417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
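
Each info.go:266 dump above is the decoded output of `docker system info --format "{{json .}}"` (the second invocation is the cli_runner.go:164 line at 13:04:18.677902). A minimal Go sketch of the same probe; the partial struct and field selection here are illustrative assumptions, not minikube's actual types:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo decodes only a few of the fields visible in the dump above.
    type dockerInfo struct {
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
    }

    func main() {
        // Same command as the cli_runner.go:164 line above.
        out, err := exec.Command("docker", "system", "info",
            "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("NCPU=%d MemTotal=%d Server=%s OS=%s\n",
            info.NCPU, info.MemTotal, info.ServerVersion, info.OperatingSystem)
    }
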
	I0319 13:04:18.780637    9837 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 13:04:18.780713    9837 cni.go:84] Creating CNI manager for ""
	I0319 13:04:18.780723    9837 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0319 13:04:18.780801    9837 start.go:340] cluster config:
	{Name:multinode-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 13:04:18.823369    9837 out.go:177] * Starting "multinode-472000" primary control-plane node in "multinode-472000" cluster
	I0319 13:04:18.844556    9837 cache.go:121] Beginning downloading kic base image for docker with docker
	I0319 13:04:18.866528    9837 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0319 13:04:18.909612    9837 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:04:18.909673    9837 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0319 13:04:18.909695    9837 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0319 13:04:18.909722    9837 cache.go:56] Caching tarball of preloaded images
	I0319 13:04:18.909974    9837 preload.go:173] Found /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0319 13:04:18.909995    9837 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0319 13:04:18.910959    9837 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/multinode-472000/config.json ...
	I0319 13:04:18.961231    9837 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0319 13:04:18.961251    9837 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0319 13:04:18.961270    9837 cache.go:194] Successfully downloaded all kic artifacts
	I0319 13:04:18.961312    9837 start.go:360] acquireMachinesLock for multinode-472000: {Name:mk0f09b10168214c476d3d2276b0688fe6ad0b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 13:04:18.961399    9837 start.go:364] duration metric: took 69.534µs to acquireMachinesLock for "multinode-472000"
	I0319 13:04:18.961423    9837 start.go:96] Skipping create...Using existing machine configuration
	I0319 13:04:18.961432    9837 fix.go:54] fixHost starting: 
	I0319 13:04:18.961687    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:19.011240    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:04:19.011303    9837 fix.go:112] recreateIfNeeded on multinode-472000: state= err=unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:19.011325    9837 fix.go:117] machineExists: false. err=machine does not exist
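
The fix.go:112/117 lines above show how a missing machine is detected: an inspect that fails with "No such container" is mapped to machineExists=false rather than treated as fatal, which is what triggers the recreate path that follows. A minimal sketch of that check (hypothetical helper names, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // machineExists reports whether the container backing a profile still
    // exists, based solely on `docker container inspect` output, mirroring
    // the recreateIfNeeded decision logged above.
    func machineExists(name string) (bool, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").CombinedOutput()
        if err == nil {
            return true, nil
        }
        if strings.Contains(string(out), "No such container") {
            return false, nil // container gone: caller should recreate it
        }
        return false, fmt.Errorf("unknown state %q: %w", name, err)
    }

    func main() {
        ok, err := machineExists("multinode-472000")
        fmt.Println(ok, err)
    }
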
	I0319 13:04:19.033057    9837 out.go:177] * docker "multinode-472000" container is missing, will recreate.
	I0319 13:04:19.074792    9837 delete.go:124] DEMOLISHING multinode-472000 ...
	I0319 13:04:19.074988    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:19.126443    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	W0319 13:04:19.126502    9837 stop.go:83] unable to get state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:19.126521    9837 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:19.126878    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:19.175698    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:04:19.175749    9837 delete.go:82] Unable to get host status for multinode-472000, assuming it has already been deleted: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:19.175842    9837 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-472000
	W0319 13:04:19.225857    9837 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-472000 returned with exit code 1
	I0319 13:04:19.225890    9837 kic.go:371] could not find the container multinode-472000 to remove it. will try anyways
	I0319 13:04:19.225971    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:19.275498    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	W0319 13:04:19.275542    9837 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:19.275615    9837 cli_runner.go:164] Run: docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0"
	W0319 13:04:19.324904    9837 cli_runner.go:211] docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0319 13:04:19.324932    9837 oci.go:650] error shutdown multinode-472000: docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:20.325863    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:20.379011    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:04:20.379053    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:20.379060    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:04:20.379100    9837 retry.go:31] will retry after 367.133028ms: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:20.747795    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:20.800238    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:04:20.800288    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:20.800308    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:04:20.800337    9837 retry.go:31] will retry after 530.199477ms: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:21.332097    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:21.385399    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:04:21.385444    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:21.385452    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:04:21.385473    9837 retry.go:31] will retry after 1.385970593s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:22.772993    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:22.824362    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:04:22.824403    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:22.824411    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:04:22.824434    9837 retry.go:31] will retry after 1.058460263s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:23.885285    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:23.936649    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:04:23.936706    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:23.936717    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:04:23.936743    9837 retry.go:31] will retry after 2.434752595s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:26.371828    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:26.423032    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:04:26.423074    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:26.423083    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:04:26.423109    9837 retry.go:31] will retry after 4.37880211s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:30.803452    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:30.854662    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:04:30.854706    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:30.854723    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:04:30.854756    9837 retry.go:31] will retry after 3.304482436s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:34.159718    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:04:34.213064    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:04:34.213104    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:04:34.213116    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:04:34.213150    9837 oci.go:88] couldn't shut down multinode-472000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	 
	I0319 13:04:34.213220    9837 cli_runner.go:164] Run: docker rm -f -v multinode-472000
	I0319 13:04:34.263177    9837 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-472000
	W0319 13:04:34.312080    9837 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-472000 returned with exit code 1
	I0319 13:04:34.312186    9837 cli_runner.go:164] Run: docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:04:34.362872    9837 cli_runner.go:164] Run: docker network rm multinode-472000
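
The retry.go:31 lines in the demolish sequence above follow a jittered, roughly exponential backoff (367ms, 530ms, 1.39s, 1.06s, 2.43s, 4.38s, 3.30s) before oci.go:88 gives up and falls through to `docker rm -f -v` and `docker network rm`. A sketch of that shape, assuming a simplified helper rather than minikube's real retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil re-runs f with a randomized, roughly doubling delay until
    // it succeeds or the deadline elapses, echoing the delays logged above.
    func retryUntil(deadline time.Duration, f func() error) error {
        start := time.Now()
        base := 300 * time.Millisecond
        for {
            err := f()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("giving up after %s: %w", deadline, err)
            }
            sleep := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            base *= 2
        }
    }

    func main() {
        _ = retryUntil(15*time.Second, func() error {
            return errors.New("couldn't verify container is exited")
        })
    }
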
	I0319 13:04:34.464773    9837 fix.go:124] Sleeping 1 second for extra luck!
	I0319 13:04:35.466961    9837 start.go:125] createHost starting for "" (driver="docker")
	I0319 13:04:35.490347    9837 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0319 13:04:35.490528    9837 start.go:159] libmachine.API.Create for "multinode-472000" (driver="docker")
	I0319 13:04:35.490573    9837 client.go:168] LocalClient.Create starting
	I0319 13:04:35.490758    9837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 13:04:35.490848    9837 main.go:141] libmachine: Decoding PEM data...
	I0319 13:04:35.490881    9837 main.go:141] libmachine: Parsing certificate...
	I0319 13:04:35.490972    9837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 13:04:35.491041    9837 main.go:141] libmachine: Decoding PEM data...
	I0319 13:04:35.491055    9837 main.go:141] libmachine: Parsing certificate...
	I0319 13:04:35.512076    9837 cli_runner.go:164] Run: docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 13:04:35.562037    9837 cli_runner.go:211] docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 13:04:35.562117    9837 network_create.go:281] running [docker network inspect multinode-472000] to gather additional debugging logs...
	I0319 13:04:35.562134    9837 cli_runner.go:164] Run: docker network inspect multinode-472000
	W0319 13:04:35.611612    9837 cli_runner.go:211] docker network inspect multinode-472000 returned with exit code 1
	I0319 13:04:35.611641    9837 network_create.go:284] error running [docker network inspect multinode-472000]: docker network inspect multinode-472000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-472000 not found
	I0319 13:04:35.611659    9837 network_create.go:286] output of [docker network inspect multinode-472000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-472000 not found
	
	** /stderr **
	I0319 13:04:35.611791    9837 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:04:35.663140    9837 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:04:35.664797    9837 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:04:35.665157    9837 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0025ac120}
	I0319 13:04:35.665172    9837 network_create.go:124] attempt to create docker network multinode-472000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0319 13:04:35.665239    9837 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000
	W0319 13:04:35.715128    9837 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000 returned with exit code 1
	W0319 13:04:35.715168    9837 network_create.go:149] failed to create docker network multinode-472000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0319 13:04:35.715187    9837 network_create.go:116] failed to create docker network multinode-472000 192.168.67.0/24, will retry: subnet is taken
	I0319 13:04:35.716858    9837 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:04:35.717235    9837 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024c8d80}
	I0319 13:04:35.717247    9837 network_create.go:124] attempt to create docker network multinode-472000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0319 13:04:35.717317    9837 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000
	I0319 13:04:35.803043    9837 network_create.go:108] docker network multinode-472000 192.168.76.0/24 created
	I0319 13:04:35.803083    9837 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-472000" container
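
The network_create.go lines above step through candidate private /24 subnets in increments of 9 (192.168.49.0, .58.0, .67.0, .76.0, ...), skipping subnets already reserved locally and moving to the next candidate when the daemon answers "Pool overlaps with other one on this address space"; the gateway is .1 and the node's static IP is .2 of the chosen range. A sketch of that fallback loop (hypothetical helper; minikube's real implementation also tracks reservations in-process):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    // createClusterNetwork tries candidate subnets until `docker network
    // create` succeeds, treating a pool overlap as "subnet taken".
    func createClusterNetwork(name string) (string, error) {
        for third := 49; third <= 247; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            gateway := fmt.Sprintf("192.168.%d.1", third)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
                "-o", "com.docker.network.driver.mtu=65535", name).CombinedOutput()
            if err == nil {
                return subnet, nil
            }
            if strings.Contains(string(out), "Pool overlaps") {
                continue // taken on this daemon, try the next /24
            }
            return "", fmt.Errorf("network create: %s: %w", out, err)
        }
        return "", errors.New("no free private /24 found")
    }

    func main() {
        fmt.Println(createClusterNetwork("multinode-472000"))
    }
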
	I0319 13:04:35.803202    9837 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 13:04:35.853936    9837 cli_runner.go:164] Run: docker volume create multinode-472000 --label name.minikube.sigs.k8s.io=multinode-472000 --label created_by.minikube.sigs.k8s.io=true
	I0319 13:04:35.903514    9837 oci.go:103] Successfully created a docker volume multinode-472000
	I0319 13:04:35.903640    9837 cli_runner.go:164] Run: docker run --rm --name multinode-472000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-472000 --entrypoint /usr/bin/test -v multinode-472000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 13:04:36.186766    9837 oci.go:107] Successfully prepared a docker volume multinode-472000
	I0319 13:04:36.186812    9837 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:04:36.186824    9837 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 13:04:36.186937    9837 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-472000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
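
The `docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf /preloaded.tar -C /extractDir` call above unpacks the preloaded image tarball into the freshly created volume; note the next log line is six minutes later, after the 360-second createHost budget (reported below as "create host timed out in 360.000000 seconds") has already expired. A sketch of running that extraction under such a deadline, with assumed stand-in paths:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Assumed stand-ins for the tarball, volume, and base image above.
        preload := "/path/to/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4"
        volume := "multinode-472000"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375"

        // Kill the extraction if it outlives the 360s createHost budget.
        ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
        defer cancel()
        cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", preload+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if err := cmd.Run(); err != nil {
            fmt.Println("extract failed or timed out:", err)
        }
    }
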
	I0319 13:10:35.495452    9837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:10:35.495598    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:35.547993    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:35.548106    9837 retry.go:31] will retry after 244.759596ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:35.794479    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:35.845644    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:35.845769    9837 retry.go:31] will retry after 188.854411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:36.035255    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:36.088201    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:36.088302    9837 retry.go:31] will retry after 364.685039ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:36.455326    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:36.509110    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:36.509217    9837 retry.go:31] will retry after 789.776691ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:37.300827    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:37.353228    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:10:37.353333    9837 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:10:37.353353    9837 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
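
Each df probe above needs an SSH session, and each session first resolves the host port mapped to the container's 22/tcp via the inspect template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}; with the container gone, every lookup fails and is retried. A minimal sketch of that lookup (hypothetical helper name):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshPort returns the host port published for the container's 22/tcp,
    // using the same inspect template as the log lines above.
    func sshPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", container,
            "-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`).Output()
        if err != nil {
            return "", fmt.Errorf("get port 22 for %q: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        fmt.Println(sshPort("multinode-472000"))
    }
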
	I0319 13:10:37.353415    9837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:10:37.353464    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:37.404755    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:37.404857    9837 retry.go:31] will retry after 130.113494ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:37.537341    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:37.592413    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:37.592520    9837 retry.go:31] will retry after 272.703254ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:37.866603    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:37.917168    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:37.917262    9837 retry.go:31] will retry after 604.651392ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:38.522709    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:38.573475    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:10:38.573576    9837 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:10:38.573593    9837 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:38.573605    9837 start.go:128] duration metric: took 6m3.103716624s to createHost
	I0319 13:10:38.573669    9837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:10:38.573721    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:38.622287    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:38.622374    9837 retry.go:31] will retry after 292.135519ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:38.916897    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:38.967482    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:38.967577    9837 retry.go:31] will retry after 474.520964ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:39.442840    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:39.495632    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:39.495725    9837 retry.go:31] will retry after 290.690305ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:39.787250    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:39.837139    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:10:39.837242    9837 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:10:39.837257    9837 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:39.837316    9837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:10:39.837370    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:39.886658    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:39.886746    9837 retry.go:31] will retry after 141.721494ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:40.030478    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:40.083462    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:40.083563    9837 retry.go:31] will retry after 217.563884ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:40.301520    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:40.353096    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:40.353191    9837 retry.go:31] will retry after 671.468159ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:41.025176    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:41.077132    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:10:41.077224    9837 retry.go:31] will retry after 729.045455ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:41.806829    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:10:41.857566    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:10:41.857677    9837 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:10:41.857690    9837 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:41.857700    9837 fix.go:56] duration metric: took 6m22.893250687s for fixHost
	I0319 13:10:41.857705    9837 start.go:83] releasing machines lock for "multinode-472000", held for 6m22.893279834s
	W0319 13:10:41.857721    9837 start.go:713] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0319 13:10:41.857780    9837 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0319 13:10:41.857786    9837 start.go:728] Will try again in 5 seconds ...
	I0319 13:10:46.860021    9837 start.go:360] acquireMachinesLock for multinode-472000: {Name:mk0f09b10168214c476d3d2276b0688fe6ad0b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 13:10:46.860210    9837 start.go:364] duration metric: took 141.109µs to acquireMachinesLock for "multinode-472000"
	I0319 13:10:46.860244    9837 start.go:96] Skipping create...Using existing machine configuration
	I0319 13:10:46.860252    9837 fix.go:54] fixHost starting: 
	I0319 13:10:46.860690    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:10:46.913771    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:10:46.913817    9837 fix.go:112] recreateIfNeeded on multinode-472000: state= err=unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:46.913832    9837 fix.go:117] machineExists: false. err=machine does not exist
	I0319 13:10:46.934913    9837 out.go:177] * docker "multinode-472000" container is missing, will recreate.
	I0319 13:10:46.978800    9837 delete.go:124] DEMOLISHING multinode-472000 ...
	I0319 13:10:46.978997    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:10:47.029414    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	W0319 13:10:47.029469    9837 stop.go:83] unable to get state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:47.029491    9837 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:47.029842    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:10:47.079189    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:10:47.079249    9837 delete.go:82] Unable to get host status for multinode-472000, assuming it has already been deleted: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:47.079329    9837 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-472000
	W0319 13:10:47.129180    9837 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-472000 returned with exit code 1
	I0319 13:10:47.129210    9837 kic.go:371] could not find the container multinode-472000 to remove it. will try anyways
	I0319 13:10:47.129278    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:10:47.178043    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	W0319 13:10:47.178085    9837 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:47.178165    9837 cli_runner.go:164] Run: docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0"
	W0319 13:10:47.227232    9837 cli_runner.go:211] docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0319 13:10:47.227269    9837 oci.go:650] error shutdown multinode-472000: docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:48.229614    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:10:48.283169    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:10:48.283216    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:48.283228    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:10:48.283248    9837 retry.go:31] will retry after 557.59424ms: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:48.842316    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:10:48.892259    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:10:48.892312    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:48.892321    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:10:48.892344    9837 retry.go:31] will retry after 457.907892ms: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:49.350610    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:10:49.400927    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:10:49.400973    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:49.400984    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:10:49.401005    9837 retry.go:31] will retry after 1.420270325s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:50.821592    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:10:50.872372    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:10:50.872417    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:50.872429    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:10:50.872457    9837 retry.go:31] will retry after 1.659850602s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:52.534015    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:10:52.586843    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:10:52.586888    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:52.586897    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:10:52.586922    9837 retry.go:31] will retry after 3.283126699s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:55.872446    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:10:55.924074    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:10:55.924119    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:10:55.924127    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:10:55.924153    9837 retry.go:31] will retry after 5.597985819s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:11:01.522642    9837 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:11:01.575290    9837 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:11:01.575343    9837 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:11:01.575353    9837 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:11:01.575380    9837 oci.go:88] couldn't shut down multinode-472000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	 
	I0319 13:11:01.575449    9837 cli_runner.go:164] Run: docker rm -f -v multinode-472000
	I0319 13:11:01.627159    9837 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-472000
	W0319 13:11:01.676063    9837 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-472000 returned with exit code 1
	I0319 13:11:01.676166    9837 cli_runner.go:164] Run: docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:11:01.725793    9837 cli_runner.go:164] Run: docker network rm multinode-472000
	I0319 13:11:01.874373    9837 fix.go:124] Sleeping 1 second for extra luck!
	I0319 13:11:02.874563    9837 start.go:125] createHost starting for "" (driver="docker")
	I0319 13:11:02.919202    9837 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0319 13:11:02.919367    9837 start.go:159] libmachine.API.Create for "multinode-472000" (driver="docker")
	I0319 13:11:02.919393    9837 client.go:168] LocalClient.Create starting
	I0319 13:11:02.919595    9837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 13:11:02.919687    9837 main.go:141] libmachine: Decoding PEM data...
	I0319 13:11:02.919710    9837 main.go:141] libmachine: Parsing certificate...
	I0319 13:11:02.919792    9837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 13:11:02.919860    9837 main.go:141] libmachine: Decoding PEM data...
	I0319 13:11:02.919886    9837 main.go:141] libmachine: Parsing certificate...
	I0319 13:11:02.920668    9837 cli_runner.go:164] Run: docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 13:11:02.972285    9837 cli_runner.go:211] docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 13:11:02.972373    9837 network_create.go:281] running [docker network inspect multinode-472000] to gather additional debugging logs...
	I0319 13:11:02.972391    9837 cli_runner.go:164] Run: docker network inspect multinode-472000
	W0319 13:11:03.021684    9837 cli_runner.go:211] docker network inspect multinode-472000 returned with exit code 1
	I0319 13:11:03.021716    9837 network_create.go:284] error running [docker network inspect multinode-472000]: docker network inspect multinode-472000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-472000 not found
	I0319 13:11:03.021727    9837 network_create.go:286] output of [docker network inspect multinode-472000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-472000 not found
	
	** /stderr **
	I0319 13:11:03.021846    9837 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:11:03.072627    9837 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:11:03.074212    9837 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:11:03.075755    9837 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:11:03.077293    9837 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:11:03.077633    9837 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002120370}
	I0319 13:11:03.077647    9837 network_create.go:124] attempt to create docker network multinode-472000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0319 13:11:03.077725    9837 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000
	I0319 13:11:03.162583    9837 network_create.go:108] docker network multinode-472000 192.168.85.0/24 created
	I0319 13:11:03.162618    9837 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-472000" container
	I0319 13:11:03.162720    9837 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 13:11:03.235088    9837 cli_runner.go:164] Run: docker volume create multinode-472000 --label name.minikube.sigs.k8s.io=multinode-472000 --label created_by.minikube.sigs.k8s.io=true
	I0319 13:11:03.284505    9837 oci.go:103] Successfully created a docker volume multinode-472000
	I0319 13:11:03.284631    9837 cli_runner.go:164] Run: docker run --rm --name multinode-472000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-472000 --entrypoint /usr/bin/test -v multinode-472000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 13:11:03.570893    9837 oci.go:107] Successfully prepared a docker volume multinode-472000
	I0319 13:11:03.570931    9837 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:11:03.570945    9837 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 13:11:03.571049    9837 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-472000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0319 13:17:02.922682    9837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:17:02.922826    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:02.975879    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:02.975992    9837 retry.go:31] will retry after 318.061978ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:03.296412    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:03.347267    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:03.347382    9837 retry.go:31] will retry after 349.926117ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:03.699745    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:03.751499    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:03.751600    9837 retry.go:31] will retry after 306.77349ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:04.058883    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:04.111495    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:17:04.111603    9837 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:17:04.111622    9837 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:04.111681    9837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:17:04.111749    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:04.161118    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:04.161216    9837 retry.go:31] will retry after 285.690087ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:04.447927    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:04.500752    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:04.500863    9837 retry.go:31] will retry after 289.014469ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:04.790563    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:04.841745    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:04.841841    9837 retry.go:31] will retry after 421.783456ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:05.263983    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:05.316703    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:05.316806    9837 retry.go:31] will retry after 515.332951ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:05.832721    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:05.886101    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:17:05.886207    9837 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:17:05.886223    9837 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:05.886234    9837 start.go:128] duration metric: took 6m3.00878503s to createHost
	I0319 13:17:05.886302    9837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 13:17:05.886358    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:05.936776    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:05.936869    9837 retry.go:31] will retry after 360.953838ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:06.299402    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:06.351460    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:06.351564    9837 retry.go:31] will retry after 324.090723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:06.678140    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:06.729858    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:06.729954    9837 retry.go:31] will retry after 332.560338ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:07.063641    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:07.116750    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:17:07.116856    9837 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:17:07.116875    9837 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:07.116931    9837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0319 13:17:07.116985    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:07.166081    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:07.166174    9837 retry.go:31] will retry after 227.944768ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:07.394505    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:07.445638    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:07.445733    9837 retry.go:31] will retry after 222.371322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:07.668405    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:07.720919    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	I0319 13:17:07.721012    9837 retry.go:31] will retry after 355.427204ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:08.078823    9837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000
	W0319 13:17:08.130164    9837 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000 returned with exit code 1
	W0319 13:17:08.130266    9837 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	W0319 13:17:08.130280    9837 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-472000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:08.130292    9837 fix.go:56] duration metric: took 6m21.267035286s for fixHost
	I0319 13:17:08.130299    9837 start.go:83] releasing machines lock for "multinode-472000", held for 6m21.26706955s
	W0319 13:17:08.130381    9837 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-472000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-472000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0319 13:17:08.173978    9837 out.go:177] 
	W0319 13:17:08.196079    9837 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0319 13:17:08.196143    9837 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0319 13:17:08.196170    9837 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0319 13:17:08.239788    9837 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-472000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-472000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "7ca3f7eae09cd6a0beb2cff68fa0ac2eaef2c5c90411d4c0315102aa51070a6d",
	        "Created": "2024-03-19T20:11:03.123319799Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
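The leftover network shown above is the one created at 13:11:03; the earlier network.go lines record how its subnet was picked: candidate private /24 blocks are tried in order (192.168.49.0, 192.168.58.0, 192.168.67.0, 192.168.76.0) and the first one not reserved by an existing docker network wins (192.168.85.0). A minimal sketch of that walk, assuming the step of 9 in the third octet that the logged sequence suggests (firstFreeSubnet is an illustrative name, not minikube's):

package main

import "fmt"

// firstFreeSubnet walks candidate /24 blocks in the order the log shows
// (192.168.49.0, .58.0, .67.0, .76.0, .85.0, ...) and returns the first
// one that no existing docker network has reserved. The step of 9 in the
// third octet is inferred from the log, not taken from minikube's source.
func firstFreeSubnet(reserved map[string]bool) (string, bool) {
	for third := 49; third <= 246; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !reserved[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	// Subnets reported as reserved in the log above.
	reserved := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	if subnet, ok := firstFreeSubnet(reserved); ok {
		fmt.Println("using free private subnet", subnet) // 192.168.85.0/24
	}
}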
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (113.42427ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 13:17:08.547734   10282 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (785.96s)
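The oci.go/retry.go lines in the failure above all follow one shape: inspect the container's state, and if it is not yet "exited", sleep for a growing interval (557ms, 1.4s, 3.3s, 5.6s, ...) and try again until a deadline. A self-contained sketch of that polling loop, with illustrative names and timings (this is not minikube's actual code):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// waitForExited polls the container state via docker container inspect
// and retries with a growing, jittered delay until the container reports
// "exited" or the deadline passes.
func waitForExited(name string, deadline time.Duration) error {
	start := time.Now()
	delay := 500 * time.Millisecond
	for time.Since(start) < deadline {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", "{{.State.Status}}", name).Output()
		state := strings.TrimSpace(string(out))
		if err == nil && state == "exited" {
			return nil
		}
		// Corresponds to "temporary error: container ... status is <state>
		// but expect it to be exited", then "will retry after ...".
		fmt.Printf("state %q, will retry after %v\n", state, delay)
		time.Sleep(delay)
		delay += time.Duration(rand.Int63n(int64(delay))) // grow with jitter
	}
	// Matches the final "couldn't shut down ... (might be okay)" warning.
	return fmt.Errorf("couldn't verify container %q is exited", name)
}

func main() {
	if err := waitForExited("multinode-472000", 15*time.Second); err != nil {
		fmt.Println(err)
	}
}

Against a container that was never created, as in this run, every attempt fails the same way, so the loop always runs out its deadline.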

                                                
                                    
TestMultiNode/serial/DeleteNode (0.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 node delete m03: exit status 80 (202.550513ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-472000 host status: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-472000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr: exit status 7 (114.267913ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:17:08.812836   10290 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:17:08.813020   10290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:17:08.813025   10290 out.go:304] Setting ErrFile to fd 2...
	I0319 13:17:08.813028   10290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:17:08.813200   10290 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:17:08.813378   10290 out.go:298] Setting JSON to false
	I0319 13:17:08.813399   10290 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:17:08.813440   10290 notify.go:220] Checking for updates...
	I0319 13:17:08.814240   10290 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:17:08.814269   10290 status.go:255] checking status of multinode-472000 ...
	I0319 13:17:08.815009   10290 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:08.864806   10290 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:08.864883   10290 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:17:08.864904   10290 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:17:08.864927   10290 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:17:08.864935   10290 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "7ca3f7eae09cd6a0beb2cff68fa0ac2eaef2c5c90411d4c0315102aa51070a6d",
	        "Created": "2024-03-19T20:11:03.123319799Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (114.806277ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 13:17:09.033274   10296 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.49s)
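The status.go lines above show why every field reads Nonexistent: the whole host check reduces to a single docker container inspect, and any failure (here "No such container") is folded into a Nonexistent state rather than surfacing as a hard error. A rough sketch of that mapping, assuming an illustrative helper name (hostState is not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState reduces the host check to one docker container inspect: a
// clean run yields Docker's own state string ("running", "exited", ...),
// while any failure, such as "No such container", is reported as
// "Nonexistent" instead of an error.
func hostState(profile string) string {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", profile).Output()
	if err != nil {
		return "Nonexistent"
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Prints "Nonexistent" once the multinode-472000 container is gone.
	fmt.Println("host:", hostState("multinode-472000"))
}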

                                                
                                    
TestMultiNode/serial/StopMultiNode (15.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 stop: exit status 82 (14.783232559s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-472000"  ...
	* Stopping node "multinode-472000"  ...
	* Stopping node "multinode-472000"  ...
	* Stopping node "multinode-472000"  ...
	* Stopping node "multinode-472000"  ...
	* Stopping node "multinode-472000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-472000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-472000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status: exit status 7 (113.585314ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 13:17:23.930606   10323 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:17:23.930619   10323 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr: exit status 7 (113.651699ms)

                                                
                                                
-- stdout --
	multinode-472000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:17:23.993364   10327 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:17:23.993625   10327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:17:23.993630   10327 out.go:304] Setting ErrFile to fd 2...
	I0319 13:17:23.993635   10327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:17:23.993811   10327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:17:23.993979   10327 out.go:298] Setting JSON to false
	I0319 13:17:23.994000   10327 mustload.go:65] Loading cluster: multinode-472000
	I0319 13:17:23.994036   10327 notify.go:220] Checking for updates...
	I0319 13:17:23.994272   10327 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:17:23.994287   10327 status.go:255] checking status of multinode-472000 ...
	I0319 13:17:23.994668   10327 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:24.044401   10327 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:24.044474   10327 status.go:330] multinode-472000 host status = "" (err=state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	)
	I0319 13:17:24.044504   10327 status.go:257] multinode-472000 status: &{Name:multinode-472000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0319 13:17:24.044527   10327 status.go:260] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	E0319 13:17:24.044535   10327 status.go:263] The "multinode-472000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr": multinode-472000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-472000 status --alsologtostderr": multinode-472000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "7ca3f7eae09cd6a0beb2cff68fa0ac2eaef2c5c90411d4c0315102aa51070a6d",
	        "Created": "2024-03-19T20:11:03.123319799Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (113.378636ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 13:17:24.211787   10333 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (15.18s)
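The six identical "Stopping node" lines followed by GUEST_STOP_TIMEOUT suggest a bounded retry loop around a stop operation that can never succeed once the container is gone. A sketch of that shape, using plain docker stop as a stand-in for minikube's real stop path (the attempt count of 6 is read off the output above; everything else is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// stopNode retries the stop operation a fixed number of times and then
// gives up. Once the container no longer exists, every attempt fails the
// same way, producing the repeated "Stopping node" lines before the
// GUEST_STOP_TIMEOUT exit.
func stopNode(name string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		fmt.Printf("* Stopping node %q  ...\n", name)
		if err = exec.Command("docker", "stop", name).Run(); err == nil {
			return nil
		}
	}
	return fmt.Errorf("unable to stop VM: %v", err)
}

func main() {
	if err := stopNode("multinode-472000", 6); err != nil {
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}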

                                                
                                    
TestMultiNode/serial/RestartMultiNode (91.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-472000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-472000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m31.050005582s)

                                                
                                                
-- stdout --
	* [multinode-472000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-472000" primary control-plane node in "multinode-472000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* docker "multinode-472000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0319 13:17:24.275466   10337 out.go:291] Setting OutFile to fd 1 ...
	I0319 13:17:24.275623   10337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:17:24.275628   10337 out.go:304] Setting ErrFile to fd 2...
	I0319 13:17:24.275632   10337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 13:17:24.275805   10337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 13:17:24.277210   10337 out.go:298] Setting JSON to false
	I0319 13:17:24.299677   10337 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4615,"bootTime":1710874829,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 13:17:24.299779   10337 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 13:17:24.322013   10337 out.go:177] * [multinode-472000] minikube v1.32.0 on Darwin 14.3.1
	I0319 13:17:24.364652   10337 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 13:17:24.364699   10337 notify.go:220] Checking for updates...
	I0319 13:17:24.386586   10337 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 13:17:24.429332   10337 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 13:17:24.450621   10337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 13:17:24.472425   10337 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	I0319 13:17:24.514519   10337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 13:17:24.536124   10337 config.go:182] Loaded profile config "multinode-472000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 13:17:24.536900   10337 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 13:17:24.593163   10337 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 13:17:24.593337   10337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:17:24.691972   10337 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:121 SystemTime:2024-03-19 20:17:24.681704714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 13:17:24.735308   10337 out.go:177] * Using the docker driver based on existing profile
	I0319 13:17:24.756962   10337 start.go:297] selected driver: docker
	I0319 13:17:24.757037   10337 start.go:901] validating driver "docker" against &{Name:multinode-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-472000 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 13:17:24.757146   10337 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 13:17:24.757382   10337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 13:17:24.859164   10337 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:121 SystemTime:2024-03-19 20:17:24.849362367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 13:17:24.862214   10337 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0319 13:17:24.862286   10337 cni.go:84] Creating CNI manager for ""
	I0319 13:17:24.862296   10337 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0319 13:17:24.862372   10337 start.go:340] cluster config:
	{Name:multinode-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 13:17:24.906093   10337 out.go:177] * Starting "multinode-472000" primary control-plane node in "multinode-472000" cluster
	I0319 13:17:24.927917   10337 cache.go:121] Beginning downloading kic base image for docker with docker
	I0319 13:17:24.950062   10337 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0319 13:17:24.992810   10337 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:17:24.992869   10337 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0319 13:17:24.992898   10337 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0319 13:17:24.992933   10337 cache.go:56] Caching tarball of preloaded images
	I0319 13:17:24.993285   10337 preload.go:173] Found /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0319 13:17:24.993861   10337 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0319 13:17:24.994349   10337 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/multinode-472000/config.json ...
	I0319 13:17:25.044697   10337 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0319 13:17:25.044716   10337 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0319 13:17:25.044737   10337 cache.go:194] Successfully downloaded all kic artifacts
	I0319 13:17:25.044778   10337 start.go:360] acquireMachinesLock for multinode-472000: {Name:mk0f09b10168214c476d3d2276b0688fe6ad0b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0319 13:17:25.044862   10337 start.go:364] duration metric: took 65.974µs to acquireMachinesLock for "multinode-472000"
	I0319 13:17:25.044883   10337 start.go:96] Skipping create...Using existing machine configuration
	I0319 13:17:25.044892   10337 fix.go:54] fixHost starting: 
	I0319 13:17:25.045127   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:25.094922   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:25.094976   10337 fix.go:112] recreateIfNeeded on multinode-472000: state= err=unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:25.094997   10337 fix.go:117] machineExists: false. err=machine does not exist
	I0319 13:17:25.116944   10337 out.go:177] * docker "multinode-472000" container is missing, will recreate.
	I0319 13:17:25.159255   10337 delete.go:124] DEMOLISHING multinode-472000 ...
	I0319 13:17:25.159448   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:25.209872   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	W0319 13:17:25.209919   10337 stop.go:83] unable to get state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:25.209938   10337 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:25.210300   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:25.259488   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:25.259554   10337 delete.go:82] Unable to get host status for multinode-472000, assuming it has already been deleted: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:25.259641   10337 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-472000
	W0319 13:17:25.309378   10337 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-472000 returned with exit code 1
	I0319 13:17:25.309413   10337 kic.go:371] could not find the container multinode-472000 to remove it. will try anyways
	I0319 13:17:25.309491   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:25.359572   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	W0319 13:17:25.359618   10337 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:25.359697   10337 cli_runner.go:164] Run: docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0"
	W0319 13:17:25.409026   10337 cli_runner.go:211] docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0319 13:17:25.409058   10337 oci.go:650] error shutdown multinode-472000: docker exec --privileged -t multinode-472000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:26.411307   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:26.465484   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:26.465530   10337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:26.465540   10337 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:17:26.465573   10337 retry.go:31] will retry after 722.069381ms: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:27.188032   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:27.239976   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:27.240023   10337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:27.240031   10337 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:17:27.240060   10337 retry.go:31] will retry after 971.514474ms: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:28.213908   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:28.266883   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:28.266927   10337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:28.266933   10337 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:17:28.266971   10337 retry.go:31] will retry after 637.162058ms: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:28.904698   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:28.956317   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:28.956362   10337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:28.956371   10337 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:17:28.956399   10337 retry.go:31] will retry after 1.276180134s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:30.234973   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:30.286082   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:30.286124   10337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:30.286131   10337 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:17:30.286156   10337 retry.go:31] will retry after 3.578671022s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:33.866626   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:33.918669   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:33.918713   10337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:33.918723   10337 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:17:33.918751   10337 retry.go:31] will retry after 3.49580826s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:37.416107   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:37.469186   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:37.469238   10337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:37.469246   10337 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:17:37.469271   10337 retry.go:31] will retry after 3.411761868s: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:40.948318   10337 cli_runner.go:164] Run: docker container inspect multinode-472000 --format={{.State.Status}}
	W0319 13:17:41.000615   10337 cli_runner.go:211] docker container inspect multinode-472000 --format={{.State.Status}} returned with exit code 1
	I0319 13:17:41.000659   10337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	I0319 13:17:41.000668   10337 oci.go:664] temporary error: container multinode-472000 status is  but expect it to be exited
	I0319 13:17:41.000696   10337 oci.go:88] couldn't shut down multinode-472000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000
	 
	I0319 13:17:41.000767   10337 cli_runner.go:164] Run: docker rm -f -v multinode-472000
	I0319 13:17:41.050562   10337 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-472000
	W0319 13:17:41.099524   10337 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-472000 returned with exit code 1
	I0319 13:17:41.099640   10337 cli_runner.go:164] Run: docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:17:41.149203   10337 cli_runner.go:164] Run: docker network rm multinode-472000
	I0319 13:17:41.257447   10337 fix.go:124] Sleeping 1 second for extra luck!
	I0319 13:17:42.257671   10337 start.go:125] createHost starting for "" (driver="docker")
	I0319 13:17:42.279915   10337 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0319 13:17:42.280089   10337 start.go:159] libmachine.API.Create for "multinode-472000" (driver="docker")
	I0319 13:17:42.280133   10337 client.go:168] LocalClient.Create starting
	I0319 13:17:42.280309   10337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/ca.pem
	I0319 13:17:42.280398   10337 main.go:141] libmachine: Decoding PEM data...
	I0319 13:17:42.280433   10337 main.go:141] libmachine: Parsing certificate...
	I0319 13:17:42.280540   10337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18453-925/.minikube/certs/cert.pem
	I0319 13:17:42.280610   10337 main.go:141] libmachine: Decoding PEM data...
	I0319 13:17:42.280626   10337 main.go:141] libmachine: Parsing certificate...
	I0319 13:17:42.302013   10337 cli_runner.go:164] Run: docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0319 13:17:42.355295   10337 cli_runner.go:211] docker network inspect multinode-472000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0319 13:17:42.355393   10337 network_create.go:281] running [docker network inspect multinode-472000] to gather additional debugging logs...
	I0319 13:17:42.355409   10337 cli_runner.go:164] Run: docker network inspect multinode-472000
	W0319 13:17:42.405567   10337 cli_runner.go:211] docker network inspect multinode-472000 returned with exit code 1
	I0319 13:17:42.405596   10337 network_create.go:284] error running [docker network inspect multinode-472000]: docker network inspect multinode-472000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-472000 not found
	I0319 13:17:42.405608   10337 network_create.go:286] output of [docker network inspect multinode-472000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-472000 not found
	
	** /stderr **
	I0319 13:17:42.405729   10337 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0319 13:17:42.458068   10337 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:17:42.459703   10337 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:17:42.460072   10337 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000837b00}
	I0319 13:17:42.460087   10337 network_create.go:124] attempt to create docker network multinode-472000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0319 13:17:42.460152   10337 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000
	W0319 13:17:42.509930   10337 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000 returned with exit code 1
	W0319 13:17:42.509965   10337 network_create.go:149] failed to create docker network multinode-472000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0319 13:17:42.509987   10337 network_create.go:116] failed to create docker network multinode-472000 192.168.67.0/24, will retry: subnet is taken
	I0319 13:17:42.511611   10337 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0319 13:17:42.512007   10337 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002515b20}
	I0319 13:17:42.512020   10337 network_create.go:124] attempt to create docker network multinode-472000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0319 13:17:42.512093   10337 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-472000 multinode-472000
	I0319 13:17:42.596701   10337 network_create.go:108] docker network multinode-472000 192.168.76.0/24 created
	I0319 13:17:42.596741   10337 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-472000" container
	I0319 13:17:42.596844   10337 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0319 13:17:42.648150   10337 cli_runner.go:164] Run: docker volume create multinode-472000 --label name.minikube.sigs.k8s.io=multinode-472000 --label created_by.minikube.sigs.k8s.io=true
	I0319 13:17:42.697332   10337 oci.go:103] Successfully created a docker volume multinode-472000
	I0319 13:17:42.697457   10337 cli_runner.go:164] Run: docker run --rm --name multinode-472000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-472000 --entrypoint /usr/bin/test -v multinode-472000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0319 13:17:42.983412   10337 oci.go:107] Successfully prepared a docker volume multinode-472000
	I0319 13:17:42.983447   10337 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 13:17:42.983460   10337 kic.go:194] Starting extracting preloaded images to volume ...
	I0319 13:17:42.983572   10337 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-472000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-472000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-472000
helpers_test.go:235: (dbg) docker inspect multinode-472000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-472000",
	        "Id": "fda4040230db11959d5d39abf71ee7f084b779349a229bdaab39cec33db88806",
	        "Created": "2024-03-19T20:17:42.493029094Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-472000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-472000 -n multinode-472000: exit status 7 (115.692421ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 13:18:55.500552   10455 status.go:249] status error: host: state: unknown state "multinode-472000": docker container inspect multinode-472000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-472000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-472000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (91.22s)
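The pattern repeated throughout the log above is minikube's recreate path: every "docker container inspect multinode-472000 --format={{.State.Status}}" exits 1 because the container no longer exists, and the oci.go/retry.go code keeps polling with growing, jittered delays before concluding the shutdown "might be okay" and recreating. A minimal sketch of that polling loop, with illustrative function names of my own (this is not minikube's internal API):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"strings"
		"time"
	)

	// containerStatus shells out the same way the log does:
	// docker container inspect <name> --format={{.State.Status}}
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitForExited polls until the container reports "exited" or the deadline
	// lapses, sleeping a jittered interval between attempts, roughly matching
	// the 0.6s-3.6s retry delays printed by retry.go above.
	func waitForExited(name string, deadline time.Duration) error {
		start := time.Now()
		for time.Since(start) < deadline {
			status, err := containerStatus(name)
			if err == nil && status == "exited" {
				return nil
			}
			// For a container that was already deleted, inspect keeps exiting 1,
			// so this loop runs until the deadline, exactly as in the log.
			time.Sleep(500*time.Millisecond + time.Duration(rand.Intn(3000))*time.Millisecond)
		}
		return fmt.Errorf("container %s never reached exited state", name)
	}

	func main() {
		if err := waitForExited("multinode-472000", 15*time.Second); err != nil {
			fmt.Println(err)
		}
	}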

                                                
                                    
TestScheduledStopUnix (300.91s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-603000 --memory=2048 --driver=docker 
E0319 13:21:22.729383    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:24:59.673222    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:25:25.936222    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-603000 --memory=2048 --driver=docker : signal: killed (5m0.005492912s)

                                                
                                                
-- stdout --
	* [scheduled-stop-603000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-603000" primary control-plane node in "scheduled-stop-603000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-603000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-603000" primary control-plane node in "scheduled-stop-603000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-19 13:26:17.979294 -0700 PDT m=+4928.439144695
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-603000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-603000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-603000",
	        "Id": "55aa543c7862e836351d919f08fd5947e9d60fc612f57bbf65c6b7ad6ca28c04",
	        "Created": "2024-03-19T20:21:19.159445127Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-603000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-603000 -n scheduled-stop-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-603000 -n scheduled-stop-603000: exit status 7 (113.374646ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 13:26:18.144299   10999 status.go:249] status error: host: state: unknown state "scheduled-stop-603000": docker container inspect scheduled-stop-603000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-603000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-603000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-603000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-603000
--- FAIL: TestScheduledStopUnix (300.91s)
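The "signal: killed" result above is the test harness enforcing its own deadline: the start command ran under a context that expired at five minutes, so the child process was SIGKILLed mid-"Creating docker container". A minimal reproduction of that mechanism, reusing the timeout and profile name from the log (a sketch, not the test's actual code):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()

		// exec.CommandContext kills the process when the context expires,
		// which the caller observes as "signal: killed".
		cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
			"start", "-p", "scheduled-stop-603000", "--memory=2048", "--driver=docker")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
		if ctx.Err() == context.DeadlineExceeded {
			fmt.Println("starting minikube: signal: killed")
		} else if err != nil {
			fmt.Println("start failed:", err)
		}
	}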

                                                
                                    
TestSkaffold (300.92s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2301158047 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-909000 --memory=2600 --driver=docker 
E0319 13:26:48.983918    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 13:29:59.674481    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:30:25.937839    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-909000 --memory=2600 --driver=docker : signal: killed (4m53.820569426s)

                                                
                                                
-- stdout --
	* [skaffold-909000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-909000" primary control-plane node in "skaffold-909000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-909000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-909000" primary control-plane node in "skaffold-909000" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-03-19 13:31:18.898407 -0700 PDT m=+5229.357201123
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-909000
helpers_test.go:235: (dbg) docker inspect skaffold-909000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-909000",
	        "Id": "331f18b496f3fdefd77c229aec653c20dd1b3057e9deb2bbeca49606f75fcae3",
	        "Created": "2024-03-19T20:26:26.215836308Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-909000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-909000 -n skaffold-909000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-909000 -n skaffold-909000: exit status 7 (114.971412ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0319 13:31:19.069068   11162 status.go:249] status error: host: state: unknown state "skaffold-909000": docker container inspect skaffold-909000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-909000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-909000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-909000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-909000
--- FAIL: TestSkaffold (300.92s)
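Note that the docker inspect in the post-mortem above matched the leftover bridge network named skaffold-909000, not a container: the container was never created, but the network and its minikube labels survived. A small helper of my own devising that lists any such leftover networks by label (the "delete -p" run that follows does the equivalent cleanup):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Filter on the label minikube stamps onto every network it creates,
		// visible in the inspect output above.
		out, err := exec.Command("docker", "network", "ls",
			"--filter", "label=created_by.minikube.sigs.k8s.io=true",
			"--format", "{{.Name}}").Output()
		if err != nil {
			fmt.Println("docker network ls failed:", err)
			return
		}
		fmt.Printf("leftover minikube networks:\n%s", out)
	}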

                                                
                                    
TestInsufficientStorage (300.76s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-374000 --memory=2048 --output=json --wait=true --driver=docker 
E0319 13:34:59.675264    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 13:35:25.939531    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-374000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.002475409s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"03fc774e-83e6-4d9d-96bb-e1b45e1796f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-374000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"64428985-4be8-4a17-a43c-4208ce9225f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18453"}}
	{"specversion":"1.0","id":"1d2bdaa5-3793-4446-ad91-2da774067c59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig"}}
	{"specversion":"1.0","id":"c7861adf-ae35-441b-95d9-ee8a0c2d2bc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"1e5fbcca-4349-453e-9b56-69eecd52a709","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eb346ece-946e-4e54-b858-be83e52e0d88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube"}}
	{"specversion":"1.0","id":"9066f946-d282-41b6-8ae2-c1ee167bbbbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6ea01600-375a-48a2-9bc2-b9f96547aa61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c933276d-9e09-4bab-8412-9acc7d6f2270","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8776bf07-83ef-4f46-9e7e-039c2f0b110e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"95c04076-c267-49d1-9545-4b94cdbb8d59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"b27d179f-a0a0-4ef6-969e-963615a3ed93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-374000\" primary control-plane node in \"insufficient-storage-374000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"066b462b-5704-4061-8026-84eaf8b372db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1710284843-18375 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"94b1e7cc-1565-4ff9-a3f7-29fe62370993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-374000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-374000 --output=json --layout=cluster: context deadline exceeded (455ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-374000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-374000
--- FAIL: TestInsufficientStorage (300.76s)
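For reference, the stdout above is minikube's --output=json stream: one CloudEvents-style JSON object per line. The "unmarshalling: unexpected end of JSON input" at status_test.go:87 is what a JSON decoder reports when that stream is empty, which is what the 455ns context-deadline failure produced. A minimal sketch of a decoder for this stream (the struct fields mirror the keys visible above; the types are mine, not minikube's):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// e.g. pipe `minikube start --output=json ...` into stdin
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			line := sc.Bytes()
			if len(line) == 0 {
				// An empty stream is exactly what triggered
				// "unexpected end of JSON input" above.
				continue
			}
			var ev event
			if err := json.Unmarshal(line, &ev); err != nil {
				fmt.Fprintln(os.Stderr, "bad event line:", err)
				continue
			}
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}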

                                                
                                    

Test pass (168/209)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 42.24
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.35
9 TestDownloadOnly/v1.20.0/DeleteAll 0.66
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.38
12 TestDownloadOnly/v1.29.3/json-events 47.7
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.34
18 TestDownloadOnly/v1.29.3/DeleteAll 0.67
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.37
21 TestDownloadOnly/v1.30.0-beta.0/json-events 26.25
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.31
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.63
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.37
29 TestDownloadOnlyKic 1.87
30 TestBinaryMirror 1.64
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
36 TestAddons/Setup 224.96
40 TestAddons/parallel/InspektorGadget 11.88
41 TestAddons/parallel/MetricsServer 5.83
42 TestAddons/parallel/HelmTiller 11.22
44 TestAddons/parallel/CSI 66.92
45 TestAddons/parallel/Headlamp 13.32
46 TestAddons/parallel/CloudSpanner 5.74
47 TestAddons/parallel/LocalPath 54.47
48 TestAddons/parallel/NvidiaDevicePlugin 5.69
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
53 TestAddons/StoppedEnableDisable 11.75
61 TestHyperKitDriverInstallOrUpdate 6.65
64 TestErrorSpam/setup 22.3
65 TestErrorSpam/start 2.17
66 TestErrorSpam/status 1.3
67 TestErrorSpam/pause 1.75
68 TestErrorSpam/unpause 1.9
69 TestErrorSpam/stop 2.85
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 37.19
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 31.43
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 10.46
81 TestFunctional/serial/CacheCmd/cache/add_local 1.64
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
83 TestFunctional/serial/CacheCmd/cache/list 0.09
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.44
85 TestFunctional/serial/CacheCmd/cache/cache_reload 4.22
86 TestFunctional/serial/CacheCmd/cache/delete 0.18
87 TestFunctional/serial/MinikubeKubectlCmd 0.57
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.73
89 TestFunctional/serial/ExtraConfig 39.9
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 3.22
92 TestFunctional/serial/LogsFileCmd 3.14
93 TestFunctional/serial/InvalidService 4.29
95 TestFunctional/parallel/ConfigCmd 0.56
96 TestFunctional/parallel/DashboardCmd 12.98
97 TestFunctional/parallel/DryRun 1.46
98 TestFunctional/parallel/InternationalLanguage 0.65
99 TestFunctional/parallel/StatusCmd 1.28
104 TestFunctional/parallel/AddonsCmd 0.28
105 TestFunctional/parallel/PersistentVolumeClaim 26.47
107 TestFunctional/parallel/SSHCmd 0.86
108 TestFunctional/parallel/CpCmd 2.62
109 TestFunctional/parallel/MySQL 30.77
110 TestFunctional/parallel/FileSync 0.45
111 TestFunctional/parallel/CertSync 2.56
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
119 TestFunctional/parallel/License 1.63
120 TestFunctional/parallel/Version/short 0.12
121 TestFunctional/parallel/Version/components 0.83
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
126 TestFunctional/parallel/ImageCommands/ImageBuild 5.67
127 TestFunctional/parallel/ImageCommands/Setup 5.7
128 TestFunctional/parallel/DockerEnv/bash 1.99
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.33
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.45
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.45
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.12
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.88
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.99
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.56
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.75
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.26
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.32
139 TestFunctional/parallel/ServiceCmd/DeployApp 13.13
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.15
145 TestFunctional/parallel/ServiceCmd/List 0.64
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
147 TestFunctional/parallel/ServiceCmd/HTTPS 15
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
154 TestFunctional/parallel/ServiceCmd/Format 15
155 TestFunctional/parallel/ServiceCmd/URL 15
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.61
157 TestFunctional/parallel/ProfileCmd/profile_list 0.57
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
159 TestFunctional/parallel/MountCmd/any-port 11.52
160 TestFunctional/parallel/MountCmd/specific-port 2.5
161 TestFunctional/parallel/MountCmd/VerifyCleanup 3.08
162 TestFunctional/delete_addon-resizer_images 0.14
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.05
168 TestMultiControlPlane/serial/StartCluster 101.44
169 TestMultiControlPlane/serial/DeployApp 9.53
170 TestMultiControlPlane/serial/PingHostFromPods 1.48
171 TestMultiControlPlane/serial/AddWorkerNode 20.81
172 TestMultiControlPlane/serial/NodeLabels 0.06
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.21
174 TestMultiControlPlane/serial/CopyFile 26.48
175 TestMultiControlPlane/serial/StopSecondaryNode 12.05
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.88
177 TestMultiControlPlane/serial/RestartSecondaryNode 50.07
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.19
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 181.41
180 TestMultiControlPlane/serial/DeleteSecondaryNode 12.3
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.86
182 TestMultiControlPlane/serial/StopCluster 33.05
183 TestMultiControlPlane/serial/RestartCluster 164.59
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.84
185 TestMultiControlPlane/serial/AddSecondaryNode 40.91
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.19
189 TestImageBuild/serial/Setup 22.08
190 TestImageBuild/serial/NormalBuild 4.94
191 TestImageBuild/serial/BuildWithBuildArg 1.21
192 TestImageBuild/serial/BuildWithDockerIgnore 1.05
193 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.09
197 TestJSONOutput/start/Command 39.58
198 TestJSONOutput/start/Audit 0
200 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/pause/Command 0.6
204 TestJSONOutput/pause/Audit 0
206 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/unpause/Command 0.6
210 TestJSONOutput/unpause/Audit 0
212 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
215 TestJSONOutput/stop/Command 10.85
216 TestJSONOutput/stop/Audit 0
218 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
219 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
220 TestErrorJSONOutput 0.78
222 TestKicCustomNetwork/create_custom_network 24.61
223 TestKicCustomNetwork/use_default_bridge_network 23.78
224 TestKicExistingNetwork 26.88
225 TestKicCustomSubnet 24.22
226 TestKicStaticIP 24.94
227 TestMainNoArgs 0.09
228 TestMinikubeProfile 49.66
231 TestMountStart/serial/StartWithMountFirst 8.41
251 TestPreload 141.57
272 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 20.29
273 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 22.03
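Together with the 22 failures reported earlier, 168 + 22 = 190 of the 209 tests have a recorded result; the remaining 19 presumably did not run (skipped).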
TestDownloadOnly/v1.20.0/json-events (42.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-742000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-742000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (42.237918646s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (42.24s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-742000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-742000: exit status 85 (344.627672ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-742000 | jenkins | v1.32.0 | 19 Mar 24 12:04 PDT |          |
	|         | -p download-only-742000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 12:04:09
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 12:04:09.457749    2048 out.go:291] Setting OutFile to fd 1 ...
	I0319 12:04:09.457944    2048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:04:09.457949    2048 out.go:304] Setting ErrFile to fd 2...
	I0319 12:04:09.457953    2048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:04:09.458139    2048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	W0319 12:04:09.458261    2048 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18453-925/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18453-925/.minikube/config/config.json: no such file or directory
	I0319 12:04:09.460026    2048 out.go:298] Setting JSON to true
	I0319 12:04:09.486360    2048 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":220,"bootTime":1710874829,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 12:04:09.486456    2048 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 12:04:09.508813    2048 out.go:97] [download-only-742000] minikube v1.32.0 on Darwin 14.3.1
	I0319 12:04:09.536655    2048 out.go:169] MINIKUBE_LOCATION=18453
	I0319 12:04:09.509015    2048 notify.go:220] Checking for updates...
	W0319 12:04:09.509019    2048 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball: no such file or directory
	I0319 12:04:09.578114    2048 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 12:04:09.620136    2048 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 12:04:09.642211    2048 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 12:04:09.663896    2048 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	W0319 12:04:09.706058    2048 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0319 12:04:09.706546    2048 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 12:04:09.768514    2048 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 12:04:09.768674    2048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 12:04:09.875192    2048 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:65 SystemTime:2024-03-19 19:04:09.863148739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 12:04:09.895799    2048 out.go:97] Using the docker driver based on user configuration
	I0319 12:04:09.895841    2048 start.go:297] selected driver: docker
	I0319 12:04:09.895852    2048 start.go:901] validating driver "docker" against <nil>
	I0319 12:04:09.896057    2048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 12:04:10.003093    2048 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:65 SystemTime:2024-03-19 19:04:09.99401677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 12:04:10.003244    2048 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 12:04:10.007626    2048 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0319 12:04:10.007790    2048 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 12:04:10.029211    2048 out.go:169] Using Docker Desktop driver with root privileges
	I0319 12:04:10.052052    2048 cni.go:84] Creating CNI manager for ""
	I0319 12:04:10.052100    2048 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0319 12:04:10.052237    2048 start.go:340] cluster config:
	{Name:download-only-742000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-742000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 12:04:10.073810    2048 out.go:97] Starting "download-only-742000" primary control-plane node in "download-only-742000" cluster
	I0319 12:04:10.073852    2048 cache.go:121] Beginning downloading kic base image for docker with docker
	I0319 12:04:10.095889    2048 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0319 12:04:10.095937    2048 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0319 12:04:10.096019    2048 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0319 12:04:10.146298    2048 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0319 12:04:10.146525    2048 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0319 12:04:10.146658    2048 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0319 12:04:10.489249    2048 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0319 12:04:10.489283    2048 cache.go:56] Caching tarball of preloaded images
	I0319 12:04:10.489658    2048 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0319 12:04:10.511401    2048 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0319 12:04:10.511429    2048 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0319 12:04:11.101047    2048 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0319 12:04:28.011313    2048 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0319 12:04:28.011519    2048 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0319 12:04:28.574903    2048 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0319 12:04:28.575132    2048 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/download-only-742000/config.json ...
	I0319 12:04:28.575157    2048 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/download-only-742000/config.json: {Name:mk38c18da680105aa837ebb3a63a2512681aa581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 12:04:28.575472    2048 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0319 12:04:28.575779    2048 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18453-925/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	I0319 12:04:32.268840    2048 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	
	
	* The control-plane node download-only-742000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-742000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.35s)
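Note on the PASS above despite "exit status 85": a download-only profile never starts a host, so "minikube logs" exits non-zero with the "host does not exist" message shown, and the test treats that as expected. A hedged Go sketch of how a caller can distinguish such exit codes; the invocation is illustrative:

    // Sketch: extract the process exit code from an exec error, as a caller
    // would to recognize the non-zero exit observed above.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	err := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-742000").Run()
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		fmt.Println("exit code:", exitErr.ExitCode()) // observed as 85 in the run above
    	}
    }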

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.66s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-742000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (47.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-949000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-949000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker : (47.700938759s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (47.70s)

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-949000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-949000: exit status 85 (338.277654ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-742000 | jenkins | v1.32.0 | 19 Mar 24 12:04 PDT |                     |
	|         | -p download-only-742000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 19 Mar 24 12:04 PDT | 19 Mar 24 12:04 PDT |
	| delete  | -p download-only-742000        | download-only-742000 | jenkins | v1.32.0 | 19 Mar 24 12:04 PDT | 19 Mar 24 12:04 PDT |
	| start   | -o=json --download-only        | download-only-949000 | jenkins | v1.32.0 | 19 Mar 24 12:04 PDT |                     |
	|         | -p download-only-949000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 12:04:53
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 12:04:53.080059    2119 out.go:291] Setting OutFile to fd 1 ...
	I0319 12:04:53.080238    2119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:04:53.080243    2119 out.go:304] Setting ErrFile to fd 2...
	I0319 12:04:53.080247    2119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:04:53.080430    2119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 12:04:53.081846    2119 out.go:298] Setting JSON to true
	I0319 12:04:53.103711    2119 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":264,"bootTime":1710874829,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 12:04:53.103794    2119 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 12:04:53.125121    2119 out.go:97] [download-only-949000] minikube v1.32.0 on Darwin 14.3.1
	I0319 12:04:53.146997    2119 out.go:169] MINIKUBE_LOCATION=18453
	I0319 12:04:53.125342    2119 notify.go:220] Checking for updates...
	I0319 12:04:53.191184    2119 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 12:04:53.213118    2119 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 12:04:53.234814    2119 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 12:04:53.255935    2119 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	W0319 12:04:53.297711    2119 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0319 12:04:53.298246    2119 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 12:04:53.355449    2119 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 12:04:53.355580    2119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 12:04:53.459331    2119 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:65 SystemTime:2024-03-19 19:04:53.449837914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 12:04:53.480923    2119 out.go:97] Using the docker driver based on user configuration
	I0319 12:04:53.481018    2119 start.go:297] selected driver: docker
	I0319 12:04:53.481074    2119 start.go:901] validating driver "docker" against <nil>
	I0319 12:04:53.481324    2119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 12:04:53.584473    2119 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:65 SystemTime:2024-03-19 19:04:53.575052924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 12:04:53.584657    2119 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 12:04:53.587503    2119 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0319 12:04:53.587645    2119 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 12:04:53.609511    2119 out.go:169] Using Docker Desktop driver with root privileges
	I0319 12:04:53.631515    2119 cni.go:84] Creating CNI manager for ""
	I0319 12:04:53.631559    2119 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0319 12:04:53.631574    2119 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 12:04:53.631706    2119 start.go:340] cluster config:
	{Name:download-only-949000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-949000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 12:04:53.653235    2119 out.go:97] Starting "download-only-949000" primary control-plane node in "download-only-949000" cluster
	I0319 12:04:53.653289    2119 cache.go:121] Beginning downloading kic base image for docker with docker
	I0319 12:04:53.675432    2119 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0319 12:04:53.675491    2119 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 12:04:53.675549    2119 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0319 12:04:53.727581    2119 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0319 12:04:53.727803    2119 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0319 12:04:53.727826    2119 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0319 12:04:53.727833    2119 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0319 12:04:53.727841    2119 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0319 12:04:53.932205    2119 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0319 12:04:53.932255    2119 cache.go:56] Caching tarball of preloaded images
	I0319 12:04:53.932586    2119 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 12:04:53.955324    2119 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0319 12:04:53.955341    2119 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0319 12:04:54.503725    2119 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4?checksum=md5:2fedab548578a1509c0f422889c3109c -> /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0319 12:05:09.963909    2119 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0319 12:05:09.964098    2119 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0319 12:05:10.466649    2119 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0319 12:05:10.466893    2119 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/download-only-949000/config.json ...
	I0319 12:05:10.466917    2119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/download-only-949000/config.json: {Name:mkad38216c56ea4e41fcda4a6cc065b11a986b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 12:05:10.467226    2119 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0319 12:05:10.467431    2119 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18453-925/.minikube/cache/darwin/amd64/v1.29.3/kubectl
	
	
	* The control-plane node download-only-949000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-949000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.34s)
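The preload downloads logged above carry their expected digest in the URL (?checksum=md5:...), and preload.go reports getting, saving, and verifying that checksum. Below is a minimal Go sketch of the same kind of verification, assuming the tarball sits in the current directory; the expected MD5 is copied from the download URL in the log, and this is not minikube's own implementation:

    // Sketch: verify a downloaded preload tarball against its expected MD5,
    // as in the "verifying checksum" steps logged above.
    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"log"
    	"os"
    )

    func main() {
    	const want = "2fedab548578a1509c0f422889c3109c" // from ?checksum=md5:... above
    	f, err := os.Open("preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	h := md5.New()
    	if _, err := io.Copy(h, f); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("checksum ok:", hex.EncodeToString(h.Sum(nil)) == want)
    }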

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (0.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.67s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-949000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/json-events (26.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-488000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-488000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=docker : (26.246322876s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (26.25s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-488000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-488000: exit status 85 (305.731049ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-742000 | jenkins | v1.32.0 | 19 Mar 24 12:04 PDT |                     |
	|         | -p download-only-742000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.32.0 | 19 Mar 24 12:04 PDT | 19 Mar 24 12:04 PDT |
	| delete  | -p download-only-742000             | download-only-742000 | jenkins | v1.32.0 | 19 Mar 24 12:04 PDT | 19 Mar 24 12:04 PDT |
	| start   | -o=json --download-only             | download-only-949000 | jenkins | v1.32.0 | 19 Mar 24 12:04 PDT |                     |
	|         | -p download-only-949000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.32.0 | 19 Mar 24 12:05 PDT | 19 Mar 24 12:05 PDT |
	| delete  | -p download-only-949000             | download-only-949000 | jenkins | v1.32.0 | 19 Mar 24 12:05 PDT | 19 Mar 24 12:05 PDT |
	| start   | -o=json --download-only             | download-only-488000 | jenkins | v1.32.0 | 19 Mar 24 12:05 PDT |                     |
	|         | -p download-only-488000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/19 12:05:42
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0319 12:05:42.167772    2197 out.go:291] Setting OutFile to fd 1 ...
	I0319 12:05:42.167953    2197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:05:42.167958    2197 out.go:304] Setting ErrFile to fd 2...
	I0319 12:05:42.167962    2197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:05:42.168144    2197 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 12:05:42.169515    2197 out.go:298] Setting JSON to true
	I0319 12:05:42.191273    2197 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":313,"bootTime":1710874829,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 12:05:42.191365    2197 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 12:05:42.212910    2197 out.go:97] [download-only-488000] minikube v1.32.0 on Darwin 14.3.1
	I0319 12:05:42.234649    2197 out.go:169] MINIKUBE_LOCATION=18453
	I0319 12:05:42.213037    2197 notify.go:220] Checking for updates...
	I0319 12:05:42.278713    2197 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 12:05:42.322496    2197 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 12:05:42.364820    2197 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 12:05:42.385628    2197 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	W0319 12:05:42.429858    2197 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0319 12:05:42.430257    2197 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 12:05:42.487792    2197 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 12:05:42.487932    2197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 12:05:42.586857    2197 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:65 SystemTime:2024-03-19 19:05:42.577338436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 12:05:42.608689    2197 out.go:97] Using the docker driver based on user configuration
	I0319 12:05:42.608742    2197 start.go:297] selected driver: docker
	I0319 12:05:42.608754    2197 start.go:901] validating driver "docker" against <nil>
	I0319 12:05:42.608973    2197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 12:05:42.715414    2197 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:65 SystemTime:2024-03-19 19:05:42.704399997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 12:05:42.715602    2197 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0319 12:05:42.718576    2197 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0319 12:05:42.718715    2197 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0319 12:05:42.740583    2197 out.go:169] Using Docker Desktop driver with root privileges
	I0319 12:05:42.762658    2197 cni.go:84] Creating CNI manager for ""
	I0319 12:05:42.762702    2197 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0319 12:05:42.762715    2197 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0319 12:05:42.762812    2197 start.go:340] cluster config:
	{Name:download-only-488000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 12:05:42.784474    2197 out.go:97] Starting "download-only-488000" primary control-plane node in "download-only-488000" cluster
	I0319 12:05:42.784517    2197 cache.go:121] Beginning downloading kic base image for docker with docker
	I0319 12:05:42.806652    2197 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0319 12:05:42.806701    2197 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0319 12:05:42.806786    2197 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0319 12:05:42.856915    2197 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0319 12:05:42.857079    2197 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0319 12:05:42.857096    2197 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0319 12:05:42.857109    2197 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0319 12:05:42.857119    2197 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0319 12:05:43.058097    2197 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0319 12:05:43.058140    2197 cache.go:56] Caching tarball of preloaded images
	I0319 12:05:43.058482    2197 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0319 12:05:43.080463    2197 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0319 12:05:43.080490    2197 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0319 12:05:43.629811    2197 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:d024b8f2a881a92d6d422e5948616edf -> /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0319 12:05:58.537001    2197 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0319 12:05:58.537191    2197 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18453-925/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0319 12:05:59.027447    2197 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0319 12:05:59.027677    2197 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/download-only-488000/config.json ...
	I0319 12:05:59.027702    2197 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/download-only-488000/config.json: {Name:mk1ca54fa4448eda35c762b319a4a84bf32e70ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0319 12:05:59.028012    2197 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0319 12:05:59.028260    2197 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18453-925/.minikube/cache/darwin/amd64/v1.30.0-beta.0/kubectl
	
	
	* The control-plane node download-only-488000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-488000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.31s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.63s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.63s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-488000
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnlyKic (1.87s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-735000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-735000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-735000
--- PASS: TestDownloadOnlyKic (1.87s)

TestBinaryMirror (1.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-984000 --alsologtostderr --binary-mirror http://127.0.0.1:49361 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-984000 --alsologtostderr --binary-mirror http://127.0.0.1:49361 --driver=docker : (1.038622463s)
helpers_test.go:175: Cleaning up "binary-mirror-984000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-984000
--- PASS: TestBinaryMirror (1.64s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-353000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-353000: exit status 85 (196.278378ms)

-- stdout --
	* Profile "addons-353000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-353000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-353000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-353000: exit status 85 (217.152919ms)

-- stdout --
	* Profile "addons-353000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-353000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (224.96s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-353000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-353000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m44.963913751s)
--- PASS: TestAddons/Setup (224.96s)

TestAddons/parallel/InspektorGadget (11.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5kznw" [42a8ccbc-1895-41bf-89d3-2b921f805c92] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004157773s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-353000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-353000: (5.878739045s)
--- PASS: TestAddons/parallel/InspektorGadget (11.88s)

TestAddons/parallel/MetricsServer (5.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.435954ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-2kj8v" [e0a59193-001f-4522-98e1-037effc7c321] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006333529s
addons_test.go:415: (dbg) Run:  kubectl --context addons-353000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-353000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)

TestAddons/parallel/HelmTiller (11.22s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.579384ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-7bd2x" [99fe72da-be93-4d8a-b140-1e3e7e201726] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004272852s
addons_test.go:473: (dbg) Run:  kubectl --context addons-353000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-353000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.374174906s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-353000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.22s)

TestAddons/parallel/CSI (66.92s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 4.36796ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-353000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-353000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [62700f7c-d154-4db8-a027-3e71ce8a5b38] Pending
helpers_test.go:344: "task-pv-pod" [62700f7c-d154-4db8-a027-3e71ce8a5b38] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [62700f7c-d154-4db8-a027-3e71ce8a5b38] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003542748s
addons_test.go:584: (dbg) Run:  kubectl --context addons-353000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-353000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-353000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-353000 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-353000 delete pod task-pv-pod: (1.041640854s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-353000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-353000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-353000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e73177d7-9a0b-4a7d-a713-45b1edd6dca8] Pending
helpers_test.go:344: "task-pv-pod-restore" [e73177d7-9a0b-4a7d-a713-45b1edd6dca8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e73177d7-9a0b-4a7d-a713-45b1edd6dca8] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.006053193s
addons_test.go:626: (dbg) Run:  kubectl --context addons-353000 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-353000 delete pod task-pv-pod-restore: (1.08533511s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-353000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-353000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-353000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-353000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.896538738s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-353000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (66.92s)

TestAddons/parallel/Headlamp (13.32s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-353000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-353000 --alsologtostderr -v=1: (1.315878964s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-jj6bk" [f8fc2bda-f9cf-4d6b-aab7-f6ca321c9c58] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-jj6bk" [f8fc2bda-f9cf-4d6b-aab7-f6ca321c9c58] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004124623s
--- PASS: TestAddons/parallel/Headlamp (13.32s)

TestAddons/parallel/CloudSpanner (5.74s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-nvbb9" [41ab715f-c953-4ce8-b4d8-37b64c4252ff] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003693031s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-353000
--- PASS: TestAddons/parallel/CloudSpanner (5.74s)

TestAddons/parallel/LocalPath (54.47s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-353000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-353000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [53b1e2c3-3fa7-4bae-84a1-c6c75ee9730f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [53b1e2c3-3fa7-4bae-84a1-c6c75ee9730f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [53b1e2c3-3fa7-4bae-84a1-c6c75ee9730f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.031636379s
addons_test.go:891: (dbg) Run:  kubectl --context addons-353000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-353000 ssh "cat /opt/local-path-provisioner/pvc-ea532222-17b2-46d9-86b2-f77839241f60_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-353000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-353000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-353000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-353000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.376833146s)
--- PASS: TestAddons/parallel/LocalPath (54.47s)

TestAddons/parallel/NvidiaDevicePlugin (5.69s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nvjml" [9c742f3d-5295-4d30-8b82-9152d699b7b1] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004458592s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-353000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.69s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-gpb4c" [feecee4e-2bf5-433a-b055-f7f6fa0c7fa4] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005112262s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-353000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-353000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (11.75s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-353000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-353000: (11.023095702s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-353000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-353000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-353000
--- PASS: TestAddons/StoppedEnableDisable (11.75s)

TestHyperKitDriverInstallOrUpdate (6.65s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.65s)

TestErrorSpam/setup (22.3s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-285000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-285000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 --driver=docker : (22.297433143s)
--- PASS: TestErrorSpam/setup (22.30s)

TestErrorSpam/start (2.17s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 start --dry-run
--- PASS: TestErrorSpam/start (2.17s)

TestErrorSpam/status (1.3s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 status
--- PASS: TestErrorSpam/status (1.30s)

TestErrorSpam/pause (1.75s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 pause
--- PASS: TestErrorSpam/pause (1.75s)

TestErrorSpam/unpause (1.9s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

TestErrorSpam/stop (2.85s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 stop: (2.233280132s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-285000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-285000 stop
--- PASS: TestErrorSpam/stop (2.85s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18453-925/.minikube/files/etc/test/nested/copy/2046/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.19s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-162000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-162000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (37.193623787s)
--- PASS: TestFunctional/serial/StartWithProxy (37.19s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (31.43s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-162000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-162000 --alsologtostderr -v=8: (31.426349524s)
functional_test.go:659: soft start took 31.426790977s for "functional-162000" cluster.
--- PASS: TestFunctional/serial/SoftStart (31.43s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-162000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 cache add registry.k8s.io/pause:3.1: (4.147039431s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 cache add registry.k8s.io/pause:3.3: (3.687644832s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 cache add registry.k8s.io/pause:latest: (2.628314852s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.46s)

TestFunctional/serial/CacheCmd/cache/add_local (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3714949330/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 cache add minikube-local-cache-test:functional-162000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 cache add minikube-local-cache-test:functional-162000: (1.093793963s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 cache delete minikube-local-cache-test:functional-162000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-162000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.64s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

TestFunctional/serial/CacheCmd/cache/cache_reload (4.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (427.471308ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 cache reload: (2.92244698s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.22s)

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 kubectl -- --context functional-162000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.57s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.73s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-162000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.73s)

TestFunctional/serial/ExtraConfig (39.9s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-162000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0319 12:14:59.476071    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:14:59.483107    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:14:59.493700    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:14:59.515116    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:14:59.557350    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:14:59.638088    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:14:59.798995    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:15:00.119502    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:15:00.759844    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:15:02.040313    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:15:04.601170    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-162000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.899532584s)
functional_test.go:757: restart took 39.899666826s for "functional-162000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.90s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-162000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.22s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 logs
E0319 12:15:09.721831    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 logs: (3.217295193s)
--- PASS: TestFunctional/serial/LogsCmd (3.22s)

TestFunctional/serial/LogsFileCmd (3.14s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd1740396018/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd1740396018/001/logs.txt: (3.137754696s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.14s)

TestFunctional/serial/InvalidService (4.29s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-162000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-162000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-162000: exit status 115 (571.290674ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30329 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-162000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)

TestFunctional/parallel/ConfigCmd (0.56s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 config get cpus: exit status 14 (75.441505ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 config get cpus: exit status 14 (68.272374ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.56s)

TestFunctional/parallel/DashboardCmd (12.98s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-162000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-162000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4476: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.98s)

TestFunctional/parallel/DryRun (1.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-162000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-162000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (750.911724ms)
-- stdout --
	* [functional-162000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0319 12:16:49.508894    4396 out.go:291] Setting OutFile to fd 1 ...
	I0319 12:16:49.509088    4396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:16:49.509093    4396 out.go:304] Setting ErrFile to fd 2...
	I0319 12:16:49.509096    4396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:16:49.509275    4396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 12:16:49.510635    4396 out.go:298] Setting JSON to false
	I0319 12:16:49.532812    4396 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":980,"bootTime":1710874829,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 12:16:49.532899    4396 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 12:16:49.557209    4396 out.go:177] * [functional-162000] minikube v1.32.0 on Darwin 14.3.1
	I0319 12:16:49.622069    4396 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 12:16:49.599756    4396 notify.go:220] Checking for updates...
	I0319 12:16:49.665607    4396 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 12:16:49.686813    4396 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 12:16:49.728564    4396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 12:16:49.749750    4396 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	I0319 12:16:49.791659    4396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 12:16:49.813397    4396 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 12:16:49.813929    4396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 12:16:49.869154    4396 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 12:16:49.869302    4396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 12:16:49.975025    4396 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:79 SystemTime:2024-03-19 19:16:49.965192496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 12:16:50.017168    4396 out.go:177] * Using the docker driver based on existing profile
	I0319 12:16:50.040385    4396 start.go:297] selected driver: docker
	I0319 12:16:50.040423    4396 start.go:901] validating driver "docker" against &{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 12:16:50.040520    4396 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 12:16:50.081409    4396 out.go:177] 
	W0319 12:16:50.118188    4396 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0319 12:16:50.139360    4396 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-162000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.46s)

TestFunctional/parallel/InternationalLanguage (0.65s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-162000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-162000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (647.411351ms)
-- stdout --
	* [functional-162000] minikube v1.32.0 sur Darwin 14.3.1
	  - MINIKUBE_LOCATION=18453
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0319 12:16:50.960179    4446 out.go:291] Setting OutFile to fd 1 ...
	I0319 12:16:50.960465    4446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:16:50.960471    4446 out.go:304] Setting ErrFile to fd 2...
	I0319 12:16:50.960475    4446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:16:50.960679    4446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 12:16:50.962309    4446 out.go:298] Setting JSON to false
	I0319 12:16:50.984995    4446 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":981,"bootTime":1710874829,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0319 12:16:50.985082    4446 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0319 12:16:51.006646    4446 out.go:177] * [functional-162000] minikube v1.32.0 sur Darwin 14.3.1
	I0319 12:16:51.048515    4446 out.go:177]   - MINIKUBE_LOCATION=18453
	I0319 12:16:51.048570    4446 notify.go:220] Checking for updates...
	I0319 12:16:51.090549    4446 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
	I0319 12:16:51.112523    4446 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0319 12:16:51.134310    4446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0319 12:16:51.155418    4446 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube
	I0319 12:16:51.176525    4446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0319 12:16:51.198014    4446 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 12:16:51.198532    4446 driver.go:392] Setting default libvirt URI to qemu:///system
	I0319 12:16:51.253839    4446 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0319 12:16:51.253997    4446 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0319 12:16:51.359151    4446 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:79 SystemTime:2024-03-19 19:16:51.349209822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213300224 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0319 12:16:51.403584    4446 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0319 12:16:51.425387    4446 start.go:297] selected driver: docker
	I0319 12:16:51.425412    4446 start.go:901] validating driver "docker" against &{Name:functional-162000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-162000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0319 12:16:51.425522    4446 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0319 12:16:51.451394    4446 out.go:177] 
	W0319 12:16:51.472459    4446 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0319 12:16:51.493405    4446 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.65s)

TestFunctional/parallel/StatusCmd (1.28s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)

TestFunctional/parallel/AddonsCmd (0.28s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (26.47s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [33b7f399-2c78-49e7-b79a-9a54c8df506a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004935146s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-162000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-162000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-162000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-162000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aabef0ad-c9f9-426e-b3d2-cad656f69354] Pending
helpers_test.go:344: "sp-pod" [aabef0ad-c9f9-426e-b3d2-cad656f69354] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [aabef0ad-c9f9-426e-b3d2-cad656f69354] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.0051505s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-162000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-162000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-162000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [56bc8517-a858-49ae-93e0-e5d83243d194] Pending
helpers_test.go:344: "sp-pod" [56bc8517-a858-49ae-93e0-e5d83243d194] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [56bc8517-a858-49ae-93e0-e5d83243d194] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003485222s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-162000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.47s)

TestFunctional/parallel/SSHCmd (0.86s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.86s)

TestFunctional/parallel/CpCmd (2.62s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh -n functional-162000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 cp functional-162000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd2549906726/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh -n functional-162000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh -n functional-162000 "sudo cat /tmp/does/not/exist/cp-test.txt"
E0319 12:15:19.963890    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CpCmd (2.62s)

TestFunctional/parallel/MySQL (30.77s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-162000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-fmp7t" [81896959-ae3d-4b9a-84ff-301181fffa3d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-fmp7t" [81896959-ae3d-4b9a-84ff-301181fffa3d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.004973983s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-162000 exec mysql-859648c796-fmp7t -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-162000 exec mysql-859648c796-fmp7t -- mysql -ppassword -e "show databases;": exit status 1 (119.413985ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-162000 exec mysql-859648c796-fmp7t -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-162000 exec mysql-859648c796-fmp7t -- mysql -ppassword -e "show databases;": exit status 1 (196.148287ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-162000 exec mysql-859648c796-fmp7t -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.77s)

TestFunctional/parallel/FileSync (0.45s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2046/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "sudo cat /etc/test/nested/copy/2046/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

TestFunctional/parallel/CertSync (2.56s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2046.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/2046.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2046.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "sudo cat /usr/share/ca-certificates/2046.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/20462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/20462.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/20462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "sudo cat /usr/share/ca-certificates/20462.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.56s)
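
Note: the paired file names checked above follow the standard OpenSSL CA-directory convention: 2046.pem is the certificate itself and 51391683.0 is the same certificate under its subject-hash name (the c_rehash-style name minikube uses when syncing certs into the node). A sketch of how that hashed name is derived:

	# Should print 51391683, the hash used for the .0 file name.
	openssl x509 -in /etc/ssl/certs/2046.pem -noout -subject_hash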

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-162000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 ssh "sudo systemctl is-active crio": exit status 1 (477.619701ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
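
Note: the non-zero exit here is the expected outcome, not a failure: systemctl is-active exits with status 3 (printing "inactive") when a unit is not running, and this test asserts that crio is disabled while Docker is the active container runtime. The same check by hand:

	# A non-zero exit plus "inactive" on stdout is the passing case here.
	out/minikube-darwin-amd64 -p functional-162000 ssh "sudo systemctl is-active crio" \
	  || echo "crio is not the active runtime"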

TestFunctional/parallel/License (1.63s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-amd64 license: (1.627587221s)
--- PASS: TestFunctional/parallel/License (1.63s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.83s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.83s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-162000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-162000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-162000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-162000 image ls --format short --alsologtostderr:
I0319 12:17:05.047891    4710 out.go:291] Setting OutFile to fd 1 ...
I0319 12:17:05.048582    4710 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 12:17:05.048591    4710 out.go:304] Setting ErrFile to fd 2...
I0319 12:17:05.048597    4710 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 12:17:05.049033    4710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
I0319 12:17:05.049992    4710 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0319 12:17:05.050090    4710 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0319 12:17:05.050445    4710 cli_runner.go:164] Run: docker container inspect functional-162000 --format={{.State.Status}}
I0319 12:17:05.107912    4710 ssh_runner.go:195] Run: systemctl --version
I0319 12:17:05.108005    4710 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-162000
I0319 12:17:05.164100    4710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50083 SSHKeyPath:/Users/jenkins/minikube-integration/18453-925/.minikube/machines/functional-162000/id_rsa Username:docker}
I0319 12:17:05.257605    4710 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-162000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.29.3           | 39f995c9f1996 | 127MB  |
| registry.k8s.io/kube-scheduler              | v1.29.3           | 8c390d98f50c0 | 59.6MB |
| docker.io/library/nginx                     | alpine            | e289a478ace02 | 42.6MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-162000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-162000 | 9ecf1d2a29df5 | 30B    |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.29.3           | 6052a25da3f97 | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.29.3           | a1d263b5dc5b0 | 82.4MB |
| docker.io/library/nginx                     | latest            | 92b11f67642b6 | 187MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-162000 image ls --format table --alsologtostderr:
I0319 12:17:05.723820    4738 out.go:291] Setting OutFile to fd 1 ...
I0319 12:17:05.724142    4738 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 12:17:05.724148    4738 out.go:304] Setting ErrFile to fd 2...
I0319 12:17:05.724152    4738 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 12:17:05.724358    4738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
I0319 12:17:05.725705    4738 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0319 12:17:05.726096    4738 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0319 12:17:05.726594    4738 cli_runner.go:164] Run: docker container inspect functional-162000 --format={{.State.Status}}
I0319 12:17:05.783703    4738 ssh_runner.go:195] Run: systemctl --version
I0319 12:17:05.783792    4738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-162000
I0319 12:17:05.838682    4738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50083 SSHKeyPath:/Users/jenkins/minikube-integration/18453-925/.minikube/machines/functional-162000/id_rsa Username:docker}
I0319 12:17:05.933550    4738 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-162000 image ls --format json --alsologtostderr:
[{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"59600000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"6052a25da3f97387a8a5a9711fb
ff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"122000000"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"127000000"},{"id":"e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":[],"repoTags":["docker.io/l
ibrary/nginx:alpine"],"size":"42600000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"9ecf1d2a29df5fcf9c7b88cde3e5ac35f1cb0488eb26204f70d2c0073116cc49","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-162000"],"size":"30"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"82400000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id"
:"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-162000"],"size":"32900000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-162000 image ls --format json --alsologtostderr:
I0319 12:17:05.395785    4723 out.go:291] Setting OutFile to fd 1 ...
I0319 12:17:05.396042    4723 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 12:17:05.396048    4723 out.go:304] Setting ErrFile to fd 2...
I0319 12:17:05.396052    4723 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 12:17:05.396220    4723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
I0319 12:17:05.396817    4723 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0319 12:17:05.396911    4723 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0319 12:17:05.397271    4723 cli_runner.go:164] Run: docker container inspect functional-162000 --format={{.State.Status}}
I0319 12:17:05.454798    4723 ssh_runner.go:195] Run: systemctl --version
I0319 12:17:05.454872    4723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-162000
I0319 12:17:05.512365    4723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50083 SSHKeyPath:/Users/jenkins/minikube-integration/18453-925/.minikube/machines/functional-162000/id_rsa Username:docker}
I0319 12:17:05.609615    4723 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-162000 image ls --format yaml --alsologtostderr:
- id: 9ecf1d2a29df5fcf9c7b88cde3e5ac35f1cb0488eb26204f70d2c0073116cc49
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-162000
size: "30"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "122000000"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "59600000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-162000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "82400000"
- id: e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "127000000"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-162000 image ls --format yaml --alsologtostderr:
I0319 12:17:05.053532    4711 out.go:291] Setting OutFile to fd 1 ...
I0319 12:17:05.053817    4711 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 12:17:05.053823    4711 out.go:304] Setting ErrFile to fd 2...
I0319 12:17:05.053827    4711 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 12:17:05.054013    4711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
I0319 12:17:05.054661    4711 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0319 12:17:05.054759    4711 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0319 12:17:05.055152    4711 cli_runner.go:164] Run: docker container inspect functional-162000 --format={{.State.Status}}
I0319 12:17:05.110717    4711 ssh_runner.go:195] Run: systemctl --version
I0319 12:17:05.110785    4711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-162000
I0319 12:17:05.166365    4711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50083 SSHKeyPath:/Users/jenkins/minikube-integration/18453-925/.minikube/machines/functional-162000/id_rsa Username:docker}
I0319 12:17:05.258480    4711 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 ssh pgrep buildkitd: exit status 1 (400.469413ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image build -t localhost/my-image:functional-162000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 image build -t localhost/my-image:functional-162000 testdata/build --alsologtostderr: (4.953764291s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-162000 image build -t localhost/my-image:functional-162000 testdata/build --alsologtostderr:
I0319 12:17:05.774355    4739 out.go:291] Setting OutFile to fd 1 ...
I0319 12:17:05.774765    4739 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 12:17:05.774771    4739 out.go:304] Setting ErrFile to fd 2...
I0319 12:17:05.774775    4739 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0319 12:17:05.774962    4739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
I0319 12:17:05.775545    4739 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0319 12:17:05.776269    4739 config.go:182] Loaded profile config "functional-162000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0319 12:17:05.776800    4739 cli_runner.go:164] Run: docker container inspect functional-162000 --format={{.State.Status}}
I0319 12:17:05.832316    4739 ssh_runner.go:195] Run: systemctl --version
I0319 12:17:05.832401    4739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-162000
I0319 12:17:05.888253    4739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50083 SSHKeyPath:/Users/jenkins/minikube-integration/18453-925/.minikube/machines/functional-162000/id_rsa Username:docker}
I0319 12:17:05.979636    4739 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3929511354.tar
I0319 12:17:05.979731    4739 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0319 12:17:05.996585    4739 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3929511354.tar
I0319 12:17:06.001636    4739 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3929511354.tar: stat -c "%s %y" /var/lib/minikube/build/build.3929511354.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3929511354.tar': No such file or directory
I0319 12:17:06.001673    4739 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3929511354.tar --> /var/lib/minikube/build/build.3929511354.tar (3072 bytes)
I0319 12:17:06.042319    4739 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3929511354
I0319 12:17:06.058238    4739 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3929511354 -xf /var/lib/minikube/build/build.3929511354.tar
I0319 12:17:06.073648    4739 docker.go:360] Building image: /var/lib/minikube/build/build.3929511354
I0319 12:17:06.073742    4739 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-162000 /var/lib/minikube/build/build.3929511354
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:c69ceeaf4a6ec7083f374c0da5992b69671ca08f04b6631b8d9ca180230517d7 done
#8 naming to localhost/my-image:functional-162000 done
#8 DONE 0.0s
I0319 12:17:10.597976    4739 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-162000 /var/lib/minikube/build/build.3929511354: (4.524328313s)
I0319 12:17:10.598046    4739 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3929511354
I0319 12:17:10.613481    4739 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3929511354.tar
I0319 12:17:10.628831    4739 build_images.go:217] Built localhost/my-image:functional-162000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3929511354.tar
I0319 12:17:10.628860    4739 build_images.go:133] succeeded building to: functional-162000
I0319 12:17:10.628865    4739 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.67s)
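
Note: the stderr log above shows how image build works on the Docker driver: the local build context (testdata/build) is tarred up, copied into the node over SSH, unpacked under /var/lib/minikube/build, and built with docker build inside the node. A rough by-hand equivalent of that flow (paths here are illustrative, not what the test uses):

	tar -cf /tmp/context.tar -C testdata/build .
	out/minikube-darwin-amd64 -p functional-162000 cp /tmp/context.tar /home/docker/context.tar
	out/minikube-darwin-amd64 -p functional-162000 ssh -- \
	    "mkdir -p /tmp/ctx && tar -C /tmp/ctx -xf /home/docker/context.tar && docker build -t localhost/my-image:functional-162000 /tmp/ctx"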

TestFunctional/parallel/ImageCommands/Setup (5.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.637866409s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-162000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.70s)

TestFunctional/parallel/DockerEnv/bash (1.99s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-162000 docker-env) && out/minikube-darwin-amd64 status -p functional-162000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-162000 docker-env) && out/minikube-darwin-amd64 status -p functional-162000": (1.22084039s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-162000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.99s)
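
Note: docker-env prints shell exports (DOCKER_HOST and friends) that point the local docker client at the Docker daemon inside the minikube node, which is exactly what the eval in the test command does. Typical interactive use with the same profile:

	eval $(out/minikube-darwin-amd64 -p functional-162000 docker-env)
	docker images   # now lists the images inside the functional-162000 node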

TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 update-context --alsologtostderr -v=2
2024/03/19 12:17:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.45s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr: (4.78377602s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.12s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr: (2.547007603s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.433354495s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-162000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr
E0319 12:15:40.443833    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 image load --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr: (4.160613175s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.99s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image save gcr.io/google-containers/addon-resizer:functional-162000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 image save gcr.io/google-containers/addon-resizer:functional-162000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.563313937s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.56s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image rm gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.941037207s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.26s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-162000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 image save --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 image save --daemon gcr.io/google-containers/addon-resizer:functional-162000 --alsologtostderr: (1.201303945s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-162000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)
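
Note: the image tests above exercise both directions of image transfer between host and node: load --daemon and load <file> push an image in, while save <file> and save --daemon pull it back out. A compact round trip with the same subcommands, using the tag from the log and an illustrative tar path:

	out/minikube-darwin-amd64 -p functional-162000 image save gcr.io/google-containers/addon-resizer:functional-162000 /tmp/addon-resizer.tar
	out/minikube-darwin-amd64 -p functional-162000 image rm gcr.io/google-containers/addon-resizer:functional-162000
	out/minikube-darwin-amd64 -p functional-162000 image load /tmp/addon-resizer.tar
	out/minikube-darwin-amd64 -p functional-162000 image ls   # the tag is back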

TestFunctional/parallel/ServiceCmd/DeployApp (13.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-162000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-162000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-9ddzv" [4e0d3591-a20a-4c8f-ac88-9bbf0ed5e675] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-9ddzv" [4e0d3591-a20a-4c8f-ac88-9bbf0ed5e675] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.004273673s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.13s)
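
Note: the helper polls the pod list until the app=hello-node pod reports Running. Outside the test harness, kubectl can do the same wait directly; a sketch against the objects created above:

	kubectl --context functional-162000 wait pod -l app=hello-node --for=condition=Ready --timeout=10m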

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-162000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-162000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-162000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4134: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-162000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-162000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-162000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ae3a9c4b-1a0a-450b-aaac-a9f34a7c9ec1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ae3a9c4b-1a0a-450b-aaac-a9f34a7c9ec1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004800535s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)

TestFunctional/parallel/ServiceCmd/List (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 service list -o json
functional_test.go:1490: Took "642.555527ms" to run "out/minikube-darwin-amd64 -p functional-162000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 service --namespace=default --https --url hello-node: signal: killed (15.002235255s)

-- stdout --
	https://127.0.0.1:50364

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:50364
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
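
Note: "signal: killed" on a passing test is expected here: with the Docker driver on darwin, minikube service keeps a tunnel process in the foreground (hence the warning that the terminal must stay open), so the test captures the printed URL and kills the command at the end of its 15s window. Driving it by hand looks roughly like:

	out/minikube-darwin-amd64 -p functional-162000 service --namespace=default --https --url hello-node &
	TUNNEL_PID=$!
	# ...use the printed https://127.0.0.1:<port> endpoint while the tunnel runs...
	kill $TUNNEL_PID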

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-162000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
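
Note: nginx-svc only has a LoadBalancer ingress IP because the tunnel started earlier in this serial group is still running; minikube tunnel is what populates .status.loadBalancer for LoadBalancer services. The query itself is plain kubectl:

	kubectl --context functional-162000 get svc nginx-svc \
	    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'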

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-162000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4163: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 service hello-node --url --format={{.IP}}
E0319 12:16:21.403290    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 service hello-node --url --format={{.IP}}: signal: killed (15.002025985s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 service hello-node --url: signal: killed (15.003661863s)

-- stdout --
	http://127.0.0.1:50408

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:50408
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "480.00449ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "86.685456ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "470.57398ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "88.304837ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

TestFunctional/parallel/MountCmd/any-port (11.52s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1003491750/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710875806143535000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1003491750/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710875806143535000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1003491750/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710875806143535000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1003491750/001/test-1710875806143535000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (443.348175ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 19 19:16 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 19 19:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 19 19:16 test-1710875806143535000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh cat /mount-9p/test-1710875806143535000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-162000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [296b203a-89c8-4705-ba51-97d508e78934] Pending
helpers_test.go:344: "busybox-mount" [296b203a-89c8-4705-ba51-97d508e78934] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [296b203a-89c8-4705-ba51-97d508e78934] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [296b203a-89c8-4705-ba51-97d508e78934] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004112449s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-162000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1003491750/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.52s)
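The any-port block above follows a fixed pattern: start `minikube mount` as a background daemon, retry findmnt over ssh until the 9p mount appears (the first probe exits 1 while the mount is still coming up), exercise the share, then unmount. A minimal Go sketch of the same start-and-poll loop, with the profile name taken from the log and the host directory as a placeholder:

// Minimal sketch: background `minikube mount`, then poll findmnt via ssh.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	mount := exec.Command("out/minikube-darwin-amd64", "mount",
		"-p", "functional-162000", "/tmp/mount-src:/mount-9p") // host dir is a placeholder
	if err := mount.Start(); err != nil { // runs as a daemon, like the test
		panic(err)
	}
	defer mount.Process.Kill()

	// The first findmnt often exits 1 because the mount is not up yet;
	// the test simply retries, and so do we.
	for i := 0; i < 10; i++ {
		err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-162000",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}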

TestFunctional/parallel/MountCmd/specific-port (2.5s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port528369521/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (432.357436ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port528369521/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 ssh "sudo umount -f /mount-9p": exit status 1 (424.035033ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-162000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port528369521/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.50s)

TestFunctional/parallel/MountCmd/VerifyCleanup (3.08s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2299641550/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2299641550/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2299641550/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T" /mount1: exit status 1 (597.052306ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T" /mount1: (1.285096842s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-162000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-162000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2299641550/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2299641550/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-162000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2299641550/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.08s)

TestFunctional/delete_addon-resizer_images (0.14s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-162000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-162000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-162000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestMultiControlPlane/serial/StartCluster (101.44s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-479000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0319 12:17:43.323696    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-479000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m40.228719882s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr: (1.214255418s)
--- PASS: TestMultiControlPlane/serial/StartCluster (101.44s)

TestMultiControlPlane/serial/DeployApp (9.53s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-479000 -- rollout status deployment/busybox: (6.810661377s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-27lbm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-b8b2w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-ggwdd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-27lbm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-b8b2w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-ggwdd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-27lbm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-b8b2w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-ggwdd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.53s)

TestMultiControlPlane/serial/PingHostFromPods (1.48s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-27lbm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-27lbm -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-b8b2w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-b8b2w -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-ggwdd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-479000 -- exec busybox-7fdf7869d9-ggwdd -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.48s)
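The pipeline in the PingHostFromPods commands, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, picks the fifth line of busybox nslookup output (the answer for the queried name) and the third space-separated field of that line (the resolved address), which the test then pings; 192.168.65.254 is the gateway address Docker Desktop exposes to containers for reaching the host. A small Go sketch of the same extraction; the sample nslookup output is illustrative, not captured from this run:

// Minimal sketch of what the logged awk/cut pipeline does: take line 5
// of busybox nslookup output, then field 3 of that line, split on single
// spaces exactly as cut -d' ' does.
package main

import (
	"fmt"
	"strings"
)

func main() {
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.65.254 host.minikube.internal`

	lines := strings.Split(sample, "\n")
	fields := strings.Split(lines[4], " ") // awk 'NR==5', then cut's single-space fields
	fmt.Println(fields[2])                 // cut -d' ' -f3 -> the IP the test pings
}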

TestMultiControlPlane/serial/AddWorkerNode (20.81s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-479000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-479000 -v=7 --alsologtostderr: (19.246861389s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr: (1.561618759s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.81s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-479000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.21s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.20663569s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.21s)

TestMultiControlPlane/serial/CopyFile (26.48s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-darwin-amd64 -p ha-479000 status --output json -v=7 --alsologtostderr: (1.481135284s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp testdata/cp-test.txt ha-479000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1860619857/001/cp-test_ha-479000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000:/home/docker/cp-test.txt ha-479000-m02:/home/docker/cp-test_ha-479000_ha-479000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m02 "sudo cat /home/docker/cp-test_ha-479000_ha-479000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000:/home/docker/cp-test.txt ha-479000-m03:/home/docker/cp-test_ha-479000_ha-479000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m03 "sudo cat /home/docker/cp-test_ha-479000_ha-479000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000:/home/docker/cp-test.txt ha-479000-m04:/home/docker/cp-test_ha-479000_ha-479000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m04 "sudo cat /home/docker/cp-test_ha-479000_ha-479000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp testdata/cp-test.txt ha-479000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1860619857/001/cp-test_ha-479000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m02:/home/docker/cp-test.txt ha-479000:/home/docker/cp-test_ha-479000-m02_ha-479000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000 "sudo cat /home/docker/cp-test_ha-479000-m02_ha-479000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m02:/home/docker/cp-test.txt ha-479000-m03:/home/docker/cp-test_ha-479000-m02_ha-479000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m03 "sudo cat /home/docker/cp-test_ha-479000-m02_ha-479000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m02:/home/docker/cp-test.txt ha-479000-m04:/home/docker/cp-test_ha-479000-m02_ha-479000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m04 "sudo cat /home/docker/cp-test_ha-479000-m02_ha-479000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp testdata/cp-test.txt ha-479000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1860619857/001/cp-test_ha-479000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m03:/home/docker/cp-test.txt ha-479000:/home/docker/cp-test_ha-479000-m03_ha-479000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000 "sudo cat /home/docker/cp-test_ha-479000-m03_ha-479000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m03:/home/docker/cp-test.txt ha-479000-m02:/home/docker/cp-test_ha-479000-m03_ha-479000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m02 "sudo cat /home/docker/cp-test_ha-479000-m03_ha-479000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m03:/home/docker/cp-test.txt ha-479000-m04:/home/docker/cp-test_ha-479000-m03_ha-479000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m04 "sudo cat /home/docker/cp-test_ha-479000-m03_ha-479000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp testdata/cp-test.txt ha-479000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1860619857/001/cp-test_ha-479000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m04:/home/docker/cp-test.txt ha-479000:/home/docker/cp-test_ha-479000-m04_ha-479000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000 "sudo cat /home/docker/cp-test_ha-479000-m04_ha-479000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m04:/home/docker/cp-test.txt ha-479000-m02:/home/docker/cp-test_ha-479000-m04_ha-479000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m02 "sudo cat /home/docker/cp-test_ha-479000-m04_ha-479000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 cp ha-479000-m04:/home/docker/cp-test.txt ha-479000-m03:/home/docker/cp-test_ha-479000-m04_ha-479000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 ssh -n ha-479000-m03 "sudo cat /home/docker/cp-test_ha-479000-m04_ha-479000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (26.48s)
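Every step in the CopyFile block above is the same round trip: `minikube cp` a file onto a node, then `minikube ssh -n <node> sudo cat` it back to confirm the bytes survived. A minimal Go sketch of one such round trip, using the profile and node names from the log; the comparison against the local testdata file mirrors what the helpers check:

// Minimal sketch of the cp-then-cat verification for one node.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	run := func(args ...string) []byte {
		out, err := exec.Command("out/minikube-darwin-amd64", args...).Output()
		if err != nil {
			panic(err)
		}
		return out
	}
	// Copy the file in, then read it back over ssh from the target node.
	run("-p", "ha-479000", "cp", "testdata/cp-test.txt", "ha-479000-m02:/home/docker/cp-test.txt")
	got := run("-p", "ha-479000", "ssh", "-n", "ha-479000-m02", "sudo cat /home/docker/cp-test.txt")
	if bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		fmt.Println("round trip OK")
	} else {
		fmt.Println("contents differ")
	}
}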

TestMultiControlPlane/serial/StopSecondaryNode (12.05s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 node stop m02 -v=7 --alsologtostderr
E0319 12:19:59.502066    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-479000 node stop m02 -v=7 --alsologtostderr: (10.92931303s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr: exit status 7 (1.122879971s)

-- stdout --
	ha-479000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-479000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-479000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-479000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0319 12:20:06.386060    5997 out.go:291] Setting OutFile to fd 1 ...
	I0319 12:20:06.386347    5997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:20:06.386352    5997 out.go:304] Setting ErrFile to fd 2...
	I0319 12:20:06.386356    5997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:20:06.386536    5997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 12:20:06.386727    5997 out.go:298] Setting JSON to false
	I0319 12:20:06.386755    5997 mustload.go:65] Loading cluster: ha-479000
	I0319 12:20:06.386800    5997 notify.go:220] Checking for updates...
	I0319 12:20:06.387076    5997 config.go:182] Loaded profile config "ha-479000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 12:20:06.387092    5997 status.go:255] checking status of ha-479000 ...
	I0319 12:20:06.387536    5997 cli_runner.go:164] Run: docker container inspect ha-479000 --format={{.State.Status}}
	I0319 12:20:06.441916    5997 status.go:330] ha-479000 host status = "Running" (err=<nil>)
	I0319 12:20:06.441952    5997 host.go:66] Checking if "ha-479000" exists ...
	I0319 12:20:06.442202    5997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-479000
	I0319 12:20:06.493187    5997 host.go:66] Checking if "ha-479000" exists ...
	I0319 12:20:06.493479    5997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 12:20:06.493550    5997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-479000
	I0319 12:20:06.546513    5997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50546 SSHKeyPath:/Users/jenkins/minikube-integration/18453-925/.minikube/machines/ha-479000/id_rsa Username:docker}
	I0319 12:20:06.642795    5997 ssh_runner.go:195] Run: systemctl --version
	I0319 12:20:06.647796    5997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 12:20:06.664729    5997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-479000
	I0319 12:20:06.717478    5997 kubeconfig.go:125] found "ha-479000" server: "https://127.0.0.1:50545"
	I0319 12:20:06.717507    5997 api_server.go:166] Checking apiserver status ...
	I0319 12:20:06.717543    5997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 12:20:06.734946    5997 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2470/cgroup
	W0319 12:20:06.750537    5997 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2470/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 12:20:06.750597    5997 ssh_runner.go:195] Run: ls
	I0319 12:20:06.754881    5997 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50545/healthz ...
	I0319 12:20:06.759395    5997 api_server.go:279] https://127.0.0.1:50545/healthz returned 200:
	ok
	I0319 12:20:06.759410    5997 status.go:422] ha-479000 apiserver status = Running (err=<nil>)
	I0319 12:20:06.759424    5997 status.go:257] ha-479000 status: &{Name:ha-479000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 12:20:06.759438    5997 status.go:255] checking status of ha-479000-m02 ...
	I0319 12:20:06.759694    5997 cli_runner.go:164] Run: docker container inspect ha-479000-m02 --format={{.State.Status}}
	I0319 12:20:06.811012    5997 status.go:330] ha-479000-m02 host status = "Stopped" (err=<nil>)
	I0319 12:20:06.811036    5997 status.go:343] host is not running, skipping remaining checks
	I0319 12:20:06.811047    5997 status.go:257] ha-479000-m02 status: &{Name:ha-479000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 12:20:06.811062    5997 status.go:255] checking status of ha-479000-m03 ...
	I0319 12:20:06.811331    5997 cli_runner.go:164] Run: docker container inspect ha-479000-m03 --format={{.State.Status}}
	I0319 12:20:06.862528    5997 status.go:330] ha-479000-m03 host status = "Running" (err=<nil>)
	I0319 12:20:06.862569    5997 host.go:66] Checking if "ha-479000-m03" exists ...
	I0319 12:20:06.862823    5997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-479000-m03
	I0319 12:20:06.913839    5997 host.go:66] Checking if "ha-479000-m03" exists ...
	I0319 12:20:06.914108    5997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 12:20:06.914156    5997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-479000-m03
	I0319 12:20:06.965915    5997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50648 SSHKeyPath:/Users/jenkins/minikube-integration/18453-925/.minikube/machines/ha-479000-m03/id_rsa Username:docker}
	I0319 12:20:07.061620    5997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 12:20:07.078373    5997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-479000
	I0319 12:20:07.131892    5997 kubeconfig.go:125] found "ha-479000" server: "https://127.0.0.1:50545"
	I0319 12:20:07.131914    5997 api_server.go:166] Checking apiserver status ...
	I0319 12:20:07.131950    5997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0319 12:20:07.148720    5997 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2282/cgroup
	W0319 12:20:07.164707    5997 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2282/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0319 12:20:07.164768    5997 ssh_runner.go:195] Run: ls
	I0319 12:20:07.169490    5997 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50545/healthz ...
	I0319 12:20:07.173721    5997 api_server.go:279] https://127.0.0.1:50545/healthz returned 200:
	ok
	I0319 12:20:07.173738    5997 status.go:422] ha-479000-m03 apiserver status = Running (err=<nil>)
	I0319 12:20:07.173747    5997 status.go:257] ha-479000-m03 status: &{Name:ha-479000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 12:20:07.173757    5997 status.go:255] checking status of ha-479000-m04 ...
	I0319 12:20:07.174005    5997 cli_runner.go:164] Run: docker container inspect ha-479000-m04 --format={{.State.Status}}
	I0319 12:20:07.226212    5997 status.go:330] ha-479000-m04 host status = "Running" (err=<nil>)
	I0319 12:20:07.226239    5997 host.go:66] Checking if "ha-479000-m04" exists ...
	I0319 12:20:07.226494    5997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-479000-m04
	I0319 12:20:07.278833    5997 host.go:66] Checking if "ha-479000-m04" exists ...
	I0319 12:20:07.279079    5997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0319 12:20:07.279127    5997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-479000-m04
	I0319 12:20:07.334967    5997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50773 SSHKeyPath:/Users/jenkins/minikube-integration/18453-925/.minikube/machines/ha-479000-m04/id_rsa Username:docker}
	I0319 12:20:07.427853    5997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0319 12:20:07.444743    5997 status.go:257] ha-479000-m04 status: &{Name:ha-479000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.05s)
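Note that the subtest above passes despite the Non-zero exit: `minikube status` reports a stopped node through its exit code (exit status 7 here, with m02 stopped), so callers can branch on the code instead of parsing the table. A minimal Go sketch of that pattern; treating any nonzero code as "some node not running" is a simplification, since status distinguishes several nonzero codes:

// Minimal sketch: read the status exit code via exec.ExitError.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "ha-479000", "status")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("some node is not running (exit code %d)\n", ee.ExitCode())
	} else if err == nil {
		fmt.Println("all nodes running")
	} else {
		panic(err) // the binary could not be started at all
	}
}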

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.88s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.88s)

TestMultiControlPlane/serial/RestartSecondaryNode (50.07s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 node start m02 -v=7 --alsologtostderr
E0319 12:20:25.765194    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:25.771151    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:25.781340    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:25.802063    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:25.842416    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:25.923527    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:26.083970    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:26.404107    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:27.044301    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:27.193363    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:20:28.324909    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:30.885214    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:36.005516    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:20:46.245579    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-479000 node start m02 -v=7 --alsologtostderr: (48.506344423s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr: (1.502515055s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (50.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.19s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.190164064s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.19s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (181.41s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-479000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-479000 -v=7 --alsologtostderr
E0319 12:21:06.725607    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-479000 -v=7 --alsologtostderr: (34.091239147s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-479000 --wait=true -v=7 --alsologtostderr
E0319 12:21:47.685260    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:23:09.604462    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-479000 --wait=true -v=7 --alsologtostderr: (2m27.170629489s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-479000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (181.41s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.3s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-479000 node delete m03 -v=7 --alsologtostderr: (11.083746433s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr: (1.087328055s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.30s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.86s)

TestMultiControlPlane/serial/StopCluster (33.05s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-479000 stop -v=7 --alsologtostderr: (32.833146773s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr: exit status 7 (217.83754ms)

-- stdout --
	ha-479000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-479000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-479000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0319 12:24:47.051914    6667 out.go:291] Setting OutFile to fd 1 ...
	I0319 12:24:47.052099    6667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:24:47.052105    6667 out.go:304] Setting ErrFile to fd 2...
	I0319 12:24:47.052108    6667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0319 12:24:47.053006    6667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18453-925/.minikube/bin
	I0319 12:24:47.053450    6667 out.go:298] Setting JSON to false
	I0319 12:24:47.053474    6667 mustload.go:65] Loading cluster: ha-479000
	I0319 12:24:47.053514    6667 notify.go:220] Checking for updates...
	I0319 12:24:47.053759    6667 config.go:182] Loaded profile config "ha-479000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0319 12:24:47.053774    6667 status.go:255] checking status of ha-479000 ...
	I0319 12:24:47.054142    6667 cli_runner.go:164] Run: docker container inspect ha-479000 --format={{.State.Status}}
	I0319 12:24:47.103967    6667 status.go:330] ha-479000 host status = "Stopped" (err=<nil>)
	I0319 12:24:47.104006    6667 status.go:343] host is not running, skipping remaining checks
	I0319 12:24:47.104013    6667 status.go:257] ha-479000 status: &{Name:ha-479000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 12:24:47.104044    6667 status.go:255] checking status of ha-479000-m02 ...
	I0319 12:24:47.104312    6667 cli_runner.go:164] Run: docker container inspect ha-479000-m02 --format={{.State.Status}}
	I0319 12:24:47.154723    6667 status.go:330] ha-479000-m02 host status = "Stopped" (err=<nil>)
	I0319 12:24:47.154746    6667 status.go:343] host is not running, skipping remaining checks
	I0319 12:24:47.154756    6667 status.go:257] ha-479000-m02 status: &{Name:ha-479000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0319 12:24:47.154781    6667 status.go:255] checking status of ha-479000-m04 ...
	I0319 12:24:47.155098    6667 cli_runner.go:164] Run: docker container inspect ha-479000-m04 --format={{.State.Status}}
	I0319 12:24:47.206446    6667 status.go:330] ha-479000-m04 host status = "Stopped" (err=<nil>)
	I0319 12:24:47.206473    6667 status.go:343] host is not running, skipping remaining checks
	I0319 12:24:47.206485    6667 status.go:257] ha-479000-m04 status: &{Name:ha-479000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.05s)

TestMultiControlPlane/serial/RestartCluster (164.59s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-479000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0319 12:24:59.495101    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
E0319 12:25:25.756603    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
E0319 12:25:53.440865    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-479000 --wait=true -v=7 --alsologtostderr --driver=docker : (2m43.294279614s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr: (1.097561657s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (164.59s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.84s)

TestMultiControlPlane/serial/AddSecondaryNode (40.91s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-479000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-479000 --control-plane -v=7 --alsologtostderr: (39.437699466s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-darwin-amd64 -p ha-479000 status -v=7 --alsologtostderr: (1.473699089s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (40.91s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.19s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.189831558s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.19s)

TestImageBuild/serial/Setup (22.08s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-409000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-409000 --driver=docker : (22.078592831s)
--- PASS: TestImageBuild/serial/Setup (22.08s)

TestImageBuild/serial/NormalBuild (4.94s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-409000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-409000: (4.944804442s)
--- PASS: TestImageBuild/serial/NormalBuild (4.94s)

TestImageBuild/serial/BuildWithBuildArg (1.21s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-409000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-409000: (1.213701541s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.21s)

TestImageBuild/serial/BuildWithDockerIgnore (1.05s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-409000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-409000: (1.0522842s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.05s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.09s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-409000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-409000: (1.091380667s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.09s)

TestJSONOutput/start/Command (39.58s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-481000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-481000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (39.577619011s)
--- PASS: TestJSONOutput/start/Command (39.58s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
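The two parallel subtests above assert a property of `minikube start --output=json`: the step events it emits carry step numbers that are distinct and strictly increasing. A minimal Go sketch of the same invariant check over an event stream; the event shape assumed here (one JSON object per line, with a data map whose currentstep value is a numeric string) is an assumption about the format, not taken from this log:

// Minimal sketch: scan JSON-output lines and check currentstep ordering.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type event struct {
	Data map[string]string `json:"data"` // assumed: string-valued fields like "currentstep"
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` in
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // ignore non-JSON lines
		}
		step, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue // not a step event
		}
		if step <= last {
			fmt.Println("step numbers repeated or went backwards:", step)
			return
		}
		last = step
	}
	fmt.Println("current steps were distinct and increasing")
}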

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-481000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-481000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.85s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-481000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-481000 --output=json --user=testUser: (10.853180022s)
--- PASS: TestJSONOutput/stop/Command (10.85s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.78s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-794000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-794000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (390.459925ms)

-- stdout --
	{"specversion":"1.0","id":"d0c72472-210e-4b97-870b-66364f1bdf34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-794000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"af1063bb-65c2-49a8-8e78-e65a63e46e5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18453"}}
	{"specversion":"1.0","id":"e45efef1-f0bd-4619-acaa-7e720228a064","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig"}}
	{"specversion":"1.0","id":"3003c617-05fc-4b47-901c-ea3319c1a50f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"3da775a7-90b8-403f-934a-2d6d84e33c13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8bf956c0-30bf-4ec4-800a-fa08916226fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18453-925/.minikube"}}
	{"specversion":"1.0","id":"4ac173c5-509d-4e20-9c7d-559b677039e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"50c0fcc8-f20a-470f-85a0-cb5900c574f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-794000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-794000
--- PASS: TestErrorJSONOutput (0.78s)
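The test passes because the failure is reported in-band: the process exit status 56 matches the "exitcode" field of the final io.k8s.sigs.minikube.error event in the stream. A small illustrative decode of that event (fields copied verbatim from the -- stdout -- block above; not the test's parsing code):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Abbreviated copy of the last event in the -- stdout -- block above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS","message":"The driver 'fail' is not supported on darwin/amd64"}}`
	var ev struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Prints: DRV_UNSUPPORTED_OS -> exit code 56: The driver 'fail' is not supported on darwin/amd64
	fmt.Printf("%s -> exit code %s: %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
}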
TestKicCustomNetwork/create_custom_network (24.61s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-233000 --network=
E0319 12:29:59.488429    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-233000 --network=: (22.085249749s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-233000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-233000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-233000: (2.471101573s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.61s)

TestKicCustomNetwork/use_default_bridge_network (23.78s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-149000 --network=bridge
E0319 12:30:25.749773    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-149000 --network=bridge: (21.452612267s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-149000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-149000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-149000: (2.270252782s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.78s)

TestKicExistingNetwork (26.88s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-521000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-521000 --network=existing-network: (24.11441099s)
helpers_test.go:175: Cleaning up "existing-network-521000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-521000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-521000: (2.274425604s)
--- PASS: TestKicExistingNetwork (26.88s)

TestKicCustomSubnet (24.22s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-153000 --subnet=192.168.60.0/24
E0319 12:31:22.538466    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-153000 --subnet=192.168.60.0/24: (21.723970334s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-153000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-153000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-153000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-153000: (2.441682525s)
--- PASS: TestKicCustomSubnet (24.22s)
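The check at kic_custom_network_test.go:161 reads the subnet back out of Docker's IPAM config using the --format template shown above. A stand-alone illustrative equivalent in Go (network name and expected subnet taken from this run; not the test's actual implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const network = "custom-subnet-153000" // the kic network is named after the profile
	const want = "192.168.60.0/24"         // the value passed to --subnet above
	// Template indexes the first IPAM config entry, as in the log line above.
	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "docker network inspect failed:", err)
		os.Exit(1)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		fmt.Fprintf(os.Stderr, "subnet mismatch: got %s, want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("subnet matches", want)
}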
TestKicStaticIP (24.94s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-061000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-061000 --static-ip=192.168.200.200: (22.445423739s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-061000 ip
helpers_test.go:175: Cleaning up "static-ip-061000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-061000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-061000: (2.251120573s)
--- PASS: TestKicStaticIP (24.94s)

TestMainNoArgs (0.09s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (49.66s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-058000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-058000 --driver=docker : (21.342345352s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-062000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-062000 --driver=docker : (21.615994443s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-058000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-062000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-062000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-062000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-062000: (2.268143964s)
helpers_test.go:175: Cleaning up "first-058000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-058000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-058000: (2.432241365s)
--- PASS: TestMinikubeProfile (49.66s)

TestMountStart/serial/StartWithMountFirst (8.41s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-432000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-432000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.411915253s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.41s)

TestPreload (141.57s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-582000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0319 13:19:59.671927    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/addons-353000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-582000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m28.441233917s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-582000 image pull gcr.io/k8s-minikube/busybox
E0319 13:20:25.934697    2046 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18453-925/.minikube/profiles/functional-162000/client.crt: no such file or directory
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-582000 image pull gcr.io/k8s-minikube/busybox: (5.406423225s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-582000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-582000: (10.777874281s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-582000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-582000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (34.08973758s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-582000 image list
helpers_test.go:175: Cleaning up "test-preload-582000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-582000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-582000: (2.494773968s)
--- PASS: TestPreload (141.57s)
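The run lines above amount to a recipe for verifying image persistence when no preload tarball is used: start with --preload=false, pull an image, stop, restart, and confirm the image still shows up in image list. An illustrative Go driver for the same sequence (flags copied from the run lines; the mk helper is hypothetical, not preload_test.go's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// mk shells out to the same binary the test drives; errors are printed,
// not fatal, to keep the sketch short.
func mk(args ...string) string {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if err != nil {
		fmt.Println("command failed:", strings.Join(args, " "), err)
	}
	return string(out)
}

func main() {
	const p = "test-preload-582000"
	mk("start", "-p", p, "--memory=2200", "--wait=true", "--preload=false",
		"--driver=docker", "--kubernetes-version=v1.24.4")
	mk("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	mk("stop", "-p", p)
	mk("start", "-p", p, "--memory=2200", "--wait=true", "--driver=docker")
	if strings.Contains(mk("-p", p, "image", "list"), "busybox") {
		fmt.Println("pulled image survived the stop/start cycle")
	}
	mk("delete", "-p", p) // clean up, as helpers_test.go does above
}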
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (20.29s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=18453
- KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1358648355/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1358648355/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1358648355/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1358648355/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (20.29s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (22.03s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=18453
- KUBECONFIG=/Users/jenkins/minikube-integration/18453-925/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3376577451/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3376577451/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3376577451/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3376577451/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (22.03s)

Test skip (19/209)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

TestAddons/parallel/Registry (17.71s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 13.827707ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vrcq6" [05465253-aeb9-4cc8-a14b-b59a66d306c8] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014248272s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nwhnm" [63a4a883-b9ce-4a65-96ab-7c22d0dc61de] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004773194s
addons_test.go:340: (dbg) Run:  kubectl --context addons-353000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-353000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-353000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.585576857s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (17.71s)

TestAddons/parallel/Ingress (10.81s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-353000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-353000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-353000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a272fa85-df8c-4c2d-9813-70024caa4378] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a272fa85-df8c-4c2d-9813-70024caa4378] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004638115s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-353000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.81s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.15s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-162000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-162000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-k8whh" [99a02234-62da-4dbd-ae21-f4342283af1f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-k8whh" [99a02234-62da-4dbd-ae21-f4342283af1f] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005456698s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.15s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)