Test Report: Docker_macOS 18634

743ee2f6c19b1c9aeee0e19f36a4d6af542f1699:2024-04-15:34041
Failed tests (22/211)

TestOffline (756s)
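A hedged sketch for rerunning just this test from a minikube checkout (this assumes minikube's integration harness registers the --minikube-start-args flag; adjust if the harness you have differs):

    go test ./test/integration -run TestOffline -timeout 40m -args --minikube-start-args="--driver=docker"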

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-347000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-347000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m35.079649483s)
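A minimal local repro sketch, assuming the same checkout and a running Docker Desktop (the start command is copied verbatim from the failing run above; the delete simply cleans up the profile afterwards):

    out/minikube-darwin-amd64 start -p offline-docker-347000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
    out/minikube-darwin-amd64 delete -p offline-docker-347000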

-- stdout --
	* [offline-docker-347000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-347000" primary control-plane node in "offline-docker-347000" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-347000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
-- /stdout --
** stderr ** 
	I0415 12:06:07.312201   18258 out.go:291] Setting OutFile to fd 1 ...
	I0415 12:06:07.312400   18258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:06:07.312409   18258 out.go:304] Setting ErrFile to fd 2...
	I0415 12:06:07.312415   18258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:06:07.312642   18258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 12:06:07.314222   18258 out.go:298] Setting JSON to false
	I0415 12:06:07.338561   18258 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7538,"bootTime":1713200429,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 12:06:07.338663   18258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 12:06:07.360332   18258 out.go:177] * [offline-docker-347000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 12:06:07.402116   18258 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 12:06:07.402126   18258 notify.go:220] Checking for updates...
	I0415 12:06:07.444045   18258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 12:06:07.465083   18258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 12:06:07.486208   18258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 12:06:07.507118   18258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	I0415 12:06:07.528057   18258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 12:06:07.549477   18258 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 12:06:07.605659   18258 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 12:06:07.605832   18258 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 12:06:07.766169   18258 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:false NGoroutines:195 SystemTime:2024-04-15 19:06:07.721418087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 12:06:07.827183   18258 out.go:177] * Using the docker driver based on user configuration
	I0415 12:06:07.848179   18258 start.go:297] selected driver: docker
	I0415 12:06:07.848199   18258 start.go:901] validating driver "docker" against <nil>
	I0415 12:06:07.848209   18258 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 12:06:07.850925   18258 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 12:06:07.955207   18258 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:false NGoroutines:195 SystemTime:2024-04-15 19:06:07.944492065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 12:06:07.955373   18258 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 12:06:07.955565   18258 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 12:06:07.976018   18258 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 12:06:07.997339   18258 cni.go:84] Creating CNI manager for ""
	I0415 12:06:07.997380   18258 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 12:06:07.997393   18258 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 12:06:07.997514   18258 start.go:340] cluster config:
	{Name:offline-docker-347000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 12:06:08.019476   18258 out.go:177] * Starting "offline-docker-347000" primary control-plane node in "offline-docker-347000" cluster
	I0415 12:06:08.061410   18258 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 12:06:08.103263   18258 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 12:06:08.166082   18258 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:06:08.166142   18258 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 12:06:08.166154   18258 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 12:06:08.166173   18258 cache.go:56] Caching tarball of preloaded images
	I0415 12:06:08.166488   18258 preload.go:173] Found /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 12:06:08.166514   18258 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 12:06:08.167988   18258 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/offline-docker-347000/config.json ...
	I0415 12:06:08.168077   18258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/offline-docker-347000/config.json: {Name:mkd3328b4692cf9099b4b0c6ccbd77bcd00d05ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 12:06:08.218026   18258 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 12:06:08.218048   18258 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 12:06:08.218070   18258 cache.go:194] Successfully downloaded all kic artifacts
	I0415 12:06:08.218108   18258 start.go:360] acquireMachinesLock for offline-docker-347000: {Name:mk1621ad47faf4042b7fac7d76cb301c1f0b88ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 12:06:08.218264   18258 start.go:364] duration metric: took 144.413µs to acquireMachinesLock for "offline-docker-347000"
	I0415 12:06:08.218293   18258 start.go:93] Provisioning new machine with config: &{Name:offline-docker-347000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 12:06:08.218384   18258 start.go:125] createHost starting for "" (driver="docker")
	I0415 12:06:08.260949   18258 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 12:06:08.261148   18258 start.go:159] libmachine.API.Create for "offline-docker-347000" (driver="docker")
	I0415 12:06:08.261172   18258 client.go:168] LocalClient.Create starting
	I0415 12:06:08.261303   18258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 12:06:08.261355   18258 main.go:141] libmachine: Decoding PEM data...
	I0415 12:06:08.261372   18258 main.go:141] libmachine: Parsing certificate...
	I0415 12:06:08.261449   18258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 12:06:08.261485   18258 main.go:141] libmachine: Decoding PEM data...
	I0415 12:06:08.261492   18258 main.go:141] libmachine: Parsing certificate...
	I0415 12:06:08.262087   18258 cli_runner.go:164] Run: docker network inspect offline-docker-347000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 12:06:08.373628   18258 cli_runner.go:211] docker network inspect offline-docker-347000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 12:06:08.373764   18258 network_create.go:281] running [docker network inspect offline-docker-347000] to gather additional debugging logs...
	I0415 12:06:08.373781   18258 cli_runner.go:164] Run: docker network inspect offline-docker-347000
	W0415 12:06:08.425284   18258 cli_runner.go:211] docker network inspect offline-docker-347000 returned with exit code 1
	I0415 12:06:08.425325   18258 network_create.go:284] error running [docker network inspect offline-docker-347000]: docker network inspect offline-docker-347000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-347000 not found
	I0415 12:06:08.425336   18258 network_create.go:286] output of [docker network inspect offline-docker-347000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-347000 not found
	
	** /stderr **
	I0415 12:06:08.425445   18258 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:06:08.526161   18258 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:06:08.527803   18258 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:06:08.528175   18258 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021755a0}
	I0415 12:06:08.528190   18258 network_create.go:124] attempt to create docker network offline-docker-347000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 12:06:08.528258   18258 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-347000 offline-docker-347000
	I0415 12:06:08.614936   18258 network_create.go:108] docker network offline-docker-347000 192.168.67.0/24 created
	I0415 12:06:08.614973   18258 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-347000" container
	I0415 12:06:08.615105   18258 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 12:06:08.666077   18258 cli_runner.go:164] Run: docker volume create offline-docker-347000 --label name.minikube.sigs.k8s.io=offline-docker-347000 --label created_by.minikube.sigs.k8s.io=true
	I0415 12:06:08.718106   18258 oci.go:103] Successfully created a docker volume offline-docker-347000
	I0415 12:06:08.718233   18258 cli_runner.go:164] Run: docker run --rm --name offline-docker-347000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-347000 --entrypoint /usr/bin/test -v offline-docker-347000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 12:06:09.199928   18258 oci.go:107] Successfully prepared a docker volume offline-docker-347000
	I0415 12:06:09.199966   18258 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:06:09.199979   18258 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 12:06:09.200093   18258 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-347000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
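Note: the next log line jumps from 12:06:09 to 12:12:08, so the preload extraction above consumed roughly the entire 360-second createHost window, and the node container itself was never created (every inspect below fails with "No such container"). A hedged way to check whether the Docker Desktop VM is the bottleneck while the extraction runs, using only standard docker CLI commands:

    docker stats --no-stream
    docker system df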
	I0415 12:12:08.322264   18258 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:12:08.322398   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:12:08.376092   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:12:08.376213   18258 retry.go:31] will retry after 361.652365ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:08.738335   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:12:08.790908   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:12:08.791015   18258 retry.go:31] will retry after 405.88219ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:09.198239   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:12:09.248165   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:12:09.248293   18258 retry.go:31] will retry after 390.274267ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:09.640360   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:12:09.693976   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	W0415 12:12:09.694079   18258 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	
	W0415 12:12:09.694102   18258 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:09.694158   18258 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:12:09.694214   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:12:09.743218   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:12:09.743313   18258 retry.go:31] will retry after 265.236154ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:10.010937   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:12:10.063998   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:12:10.064101   18258 retry.go:31] will retry after 496.519946ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:10.562547   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:12:10.614377   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:12:10.614467   18258 retry.go:31] will retry after 388.333075ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:11.004145   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:12:11.056512   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:12:11.056607   18258 retry.go:31] will retry after 625.294151ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:11.682475   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:12:11.733744   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	W0415 12:12:11.733855   18258 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	
	W0415 12:12:11.733874   18258 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:11.733898   18258 start.go:128] duration metric: took 6m3.456777555s to createHost
	I0415 12:12:11.733905   18258 start.go:83] releasing machines lock for "offline-docker-347000", held for 6m3.456907986s
	W0415 12:12:11.733920   18258 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 12:12:11.734355   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:11.783736   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:11.783799   18258 delete.go:82] Unable to get host status for offline-docker-347000, assuming it has already been deleted: state: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	W0415 12:12:11.783898   18258 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 12:12:11.783908   18258 start.go:728] Will try again in 5 seconds ...
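Note: at this point the first attempt has left behind the network and volume created above, but no container. A hedged way to list whatever minikube-labeled artifacts remain, reusing the labels shown in the log:

    docker ps -a --filter label=name.minikube.sigs.k8s.io=offline-docker-347000
    docker volume ls --filter label=name.minikube.sigs.k8s.io=offline-docker-347000
    docker network ls --filter label=name.minikube.sigs.k8s.io=offline-docker-347000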
	I0415 12:12:16.784547   18258 start.go:360] acquireMachinesLock for offline-docker-347000: {Name:mk1621ad47faf4042b7fac7d76cb301c1f0b88ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 12:12:16.784749   18258 start.go:364] duration metric: took 152.452µs to acquireMachinesLock for "offline-docker-347000"
	I0415 12:12:16.784789   18258 start.go:96] Skipping create...Using existing machine configuration
	I0415 12:12:16.784805   18258 fix.go:54] fixHost starting: 
	I0415 12:12:16.785203   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:16.836122   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:16.836176   18258 fix.go:112] recreateIfNeeded on offline-docker-347000: state= err=unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:16.836202   18258 fix.go:117] machineExists: false. err=machine does not exist
	I0415 12:12:16.858158   18258 out.go:177] * docker "offline-docker-347000" container is missing, will recreate.
	I0415 12:12:16.881567   18258 delete.go:124] DEMOLISHING offline-docker-347000 ...
	I0415 12:12:16.881750   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:16.932309   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	W0415 12:12:16.932369   18258 stop.go:83] unable to get state: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:16.932387   18258 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:16.932760   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:16.981891   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:16.981958   18258 delete.go:82] Unable to get host status for offline-docker-347000, assuming it has already been deleted: state: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:16.982041   18258 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-347000
	W0415 12:12:17.031021   18258 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-347000 returned with exit code 1
	I0415 12:12:17.031060   18258 kic.go:371] could not find the container offline-docker-347000 to remove it. will try anyways
	I0415 12:12:17.031132   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:17.080581   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	W0415 12:12:17.080626   18258 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:17.080703   18258 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-347000 /bin/bash -c "sudo init 0"
	W0415 12:12:17.129718   18258 cli_runner.go:211] docker exec --privileged -t offline-docker-347000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 12:12:17.129753   18258 oci.go:650] error shutdown offline-docker-347000: docker exec --privileged -t offline-docker-347000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:18.130472   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:18.183425   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:18.183489   18258 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:18.183503   18258 oci.go:664] temporary error: container offline-docker-347000 status is  but expect it to be exited
	I0415 12:12:18.183526   18258 retry.go:31] will retry after 331.72907ms: couldn't verify container is exited. %v: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:18.517670   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:18.572042   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:18.572095   18258 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:18.572107   18258 oci.go:664] temporary error: container offline-docker-347000 status is  but expect it to be exited
	I0415 12:12:18.572129   18258 retry.go:31] will retry after 759.413946ms: couldn't verify container is exited. %v: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:19.332878   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:19.386335   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:19.386380   18258 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:19.386396   18258 oci.go:664] temporary error: container offline-docker-347000 status is  but expect it to be exited
	I0415 12:12:19.386419   18258 retry.go:31] will retry after 1.63937689s: couldn't verify container is exited. %v: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:21.026874   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:21.080438   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:21.080490   18258 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:21.080499   18258 oci.go:664] temporary error: container offline-docker-347000 status is  but expect it to be exited
	I0415 12:12:21.080523   18258 retry.go:31] will retry after 1.532372752s: couldn't verify container is exited. %v: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:22.613329   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:22.667237   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:22.667283   18258 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:22.667295   18258 oci.go:664] temporary error: container offline-docker-347000 status is  but expect it to be exited
	I0415 12:12:22.667330   18258 retry.go:31] will retry after 1.906634493s: couldn't verify container is exited. %v: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:24.575511   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:24.627965   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:24.628009   18258 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:24.628025   18258 oci.go:664] temporary error: container offline-docker-347000 status is  but expect it to be exited
	I0415 12:12:24.628050   18258 retry.go:31] will retry after 5.348110928s: couldn't verify container is exited. %v: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:29.977010   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:30.028105   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:30.028151   18258 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:30.028162   18258 oci.go:664] temporary error: container offline-docker-347000 status is  but expect it to be exited
	I0415 12:12:30.028190   18258 retry.go:31] will retry after 4.698559321s: couldn't verify container is exited. %v: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:34.727644   18258 cli_runner.go:164] Run: docker container inspect offline-docker-347000 --format={{.State.Status}}
	W0415 12:12:34.780481   18258 cli_runner.go:211] docker container inspect offline-docker-347000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:34.780528   18258 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:12:34.780539   18258 oci.go:664] temporary error: container offline-docker-347000 status is  but expect it to be exited
	I0415 12:12:34.780575   18258 oci.go:88] couldn't shut down offline-docker-347000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	 
	I0415 12:12:34.780651   18258 cli_runner.go:164] Run: docker rm -f -v offline-docker-347000
	I0415 12:12:34.835550   18258 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-347000
	W0415 12:12:34.884962   18258 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-347000 returned with exit code 1
	I0415 12:12:34.885070   18258 cli_runner.go:164] Run: docker network inspect offline-docker-347000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:12:34.934761   18258 cli_runner.go:164] Run: docker network rm offline-docker-347000
	I0415 12:12:35.044596   18258 fix.go:124] Sleeping 1 second for extra luck!
	I0415 12:12:36.045268   18258 start.go:125] createHost starting for "" (driver="docker")
	I0415 12:12:36.067555   18258 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 12:12:36.067711   18258 start.go:159] libmachine.API.Create for "offline-docker-347000" (driver="docker")
	I0415 12:12:36.067739   18258 client.go:168] LocalClient.Create starting
	I0415 12:12:36.067905   18258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 12:12:36.067982   18258 main.go:141] libmachine: Decoding PEM data...
	I0415 12:12:36.068003   18258 main.go:141] libmachine: Parsing certificate...
	I0415 12:12:36.068060   18258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 12:12:36.068118   18258 main.go:141] libmachine: Decoding PEM data...
	I0415 12:12:36.068130   18258 main.go:141] libmachine: Parsing certificate...
	I0415 12:12:36.068652   18258 cli_runner.go:164] Run: docker network inspect offline-docker-347000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 12:12:36.120533   18258 cli_runner.go:211] docker network inspect offline-docker-347000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 12:12:36.120623   18258 network_create.go:281] running [docker network inspect offline-docker-347000] to gather additional debugging logs...
	I0415 12:12:36.120644   18258 cli_runner.go:164] Run: docker network inspect offline-docker-347000
	W0415 12:12:36.170386   18258 cli_runner.go:211] docker network inspect offline-docker-347000 returned with exit code 1
	I0415 12:12:36.170418   18258 network_create.go:284] error running [docker network inspect offline-docker-347000]: docker network inspect offline-docker-347000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-347000 not found
	I0415 12:12:36.170442   18258 network_create.go:286] output of [docker network inspect offline-docker-347000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-347000 not found
	
	** /stderr **
	I0415 12:12:36.170559   18258 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:12:36.221423   18258 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:12:36.222991   18258 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:12:36.224550   18258 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:12:36.225978   18258 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:12:36.227567   18258 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:12:36.228013   18258 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020ef1f0}
	I0415 12:12:36.228025   18258 network_create.go:124] attempt to create docker network offline-docker-347000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0415 12:12:36.228098   18258 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-347000 offline-docker-347000
	I0415 12:12:36.313702   18258 network_create.go:108] docker network offline-docker-347000 192.168.94.0/24 created
	I0415 12:12:36.313740   18258 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-347000" container
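
The subnet scan above steps the third octet by 9 (49, 58, 67, 76, 85, 94) and takes the first /24 that no existing Docker network reserves. A minimal sketch of that selection logic, with the reservation set hard-coded from the log rather than read from Docker:

package main

import "fmt"

func main() {
	// /24s already taken by existing Docker networks (per the log above).
	reserved := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}

	// Step the third octet by 9, matching the progression in the log:
	// 49, 58, 67, 76 and 85 are skipped; 94 is the first free subnet.
	for octet := 49; octet <= 254; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if reserved[octet] {
			fmt.Println("skipping reserved subnet", subnet)
			continue
		}
		fmt.Println("using free private subnet", subnet)
		break
	}
}
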
	I0415 12:12:36.313849   18258 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 12:12:36.407251   18258 cli_runner.go:164] Run: docker volume create offline-docker-347000 --label name.minikube.sigs.k8s.io=offline-docker-347000 --label created_by.minikube.sigs.k8s.io=true
	I0415 12:12:36.457007   18258 oci.go:103] Successfully created a docker volume offline-docker-347000
	I0415 12:12:36.457128   18258 cli_runner.go:164] Run: docker run --rm --name offline-docker-347000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-347000 --entrypoint /usr/bin/test -v offline-docker-347000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 12:12:36.773713   18258 oci.go:107] Successfully prepared a docker volume offline-docker-347000
	I0415 12:12:36.773749   18258 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:12:36.773762   18258 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 12:12:36.773854   18258 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-347000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 12:18:36.070566   18258 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:18:36.070665   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:36.123317   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:36.123432   18258 retry.go:31] will retry after 347.860936ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:36.473680   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:36.526302   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:36.526416   18258 retry.go:31] will retry after 314.522081ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:36.841330   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:36.894609   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:36.894708   18258 retry.go:31] will retry after 511.009818ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:37.408160   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:37.460337   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	W0415 12:18:37.460446   18258 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	
	W0415 12:18:37.460474   18258 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:37.460527   18258 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:18:37.460582   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:37.511907   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:37.512005   18258 retry.go:31] will retry after 360.207447ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:37.874437   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:37.925877   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:37.925974   18258 retry.go:31] will retry after 486.091429ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:38.413767   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:38.466599   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:38.466697   18258 retry.go:31] will retry after 533.90554ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:39.000921   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:39.053565   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	W0415 12:18:39.053683   18258 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	
	W0415 12:18:39.053704   18258 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:39.053724   18258 start.go:128] duration metric: took 6m3.006971904s to createHost
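
The "will retry after ...ms" lines above are minikube's retry helper probing for the container's SSH port with short randomized delays. A rough sketch of that probe-and-retry shape, not minikube's actual retry.go, using the same docker command the log shows:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// inspectSSHPort runs the same docker command the log retries; it returns a
// non-nil error while the container does not exist.
func inspectSSHPort(name string) error {
	return exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Run()
}

// retryUntil re-runs probe with a 300-800ms randomized delay until it
// succeeds or the deadline passes.
func retryUntil(deadline time.Time, probe func() error) error {
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			return nil
		}
		delay := 300*time.Millisecond + time.Duration(rand.Intn(500))*time.Millisecond
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
	}
	return errors.New("deadline exceeded")
}

func main() {
	err := retryUntil(time.Now().Add(5*time.Second), func() error {
		return inspectSSHPort("offline-docker-347000")
	})
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
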
	I0415 12:18:39.053796   18258 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:18:39.053854   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:39.104724   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:39.104817   18258 retry.go:31] will retry after 321.054247ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:39.427853   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:39.479240   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:39.479340   18258 retry.go:31] will retry after 376.897416ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:39.858602   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:39.913116   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:39.913216   18258 retry.go:31] will retry after 399.133259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:40.314386   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:40.366535   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	W0415 12:18:40.366635   18258 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	
	W0415 12:18:40.366659   18258 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:40.366715   18258 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:18:40.366769   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:40.416918   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:40.417009   18258 retry.go:31] will retry after 354.022866ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:40.772799   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:40.824329   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:40.824424   18258 retry.go:31] will retry after 500.041156ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:41.325758   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:41.379594   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	I0415 12:18:41.379691   18258 retry.go:31] will retry after 812.854794ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:42.192829   18258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000
	W0415 12:18:42.243417   18258 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000 returned with exit code 1
	W0415 12:18:42.243523   18258 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	
	W0415 12:18:42.243540   18258 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-347000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-347000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000
	I0415 12:18:42.243557   18258 fix.go:56] duration metric: took 6m25.457213889s for fixHost
	I0415 12:18:42.243563   18258 start.go:83] releasing machines lock for "offline-docker-347000", held for 6m25.45725996s
	W0415 12:18:42.243642   18258 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-347000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-347000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 12:18:42.285944   18258 out.go:177] 
	W0415 12:18:42.307112   18258 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 12:18:42.307163   18258 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 12:18:42.307200   18258 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 12:18:42.328004   18258 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-347000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-04-15 12:18:42.402497 -0700 PDT m=+6127.972198673
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-347000
helpers_test.go:235: (dbg) docker inspect offline-docker-347000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-347000",
	        "Id": "95e23eb36d9ad934dc42248800ebd190432635493bdd7cf9b2b3f67c5abeb61f",
	        "Created": "2024-04-15T19:12:36.274706345Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-347000"
	        }
	    }
	]

-- /stdout --
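
The inspect output confirms the failure mode: the network object survived with "Containers": {} while the node container itself was never created. A small hedged helper (illustrative, not part of the test suite) that lists minikube-labelled networks so leftovers like this can be found and deleted:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List Docker networks carrying minikube's creation label, the same
	// label visible in the inspect output above.
	out, err := exec.Command("docker", "network", "ls",
		"--filter", "label=created_by.minikube.sigs.k8s.io=true",
		"--format", "{{.Name}}").Output()
	if err != nil {
		fmt.Println("docker network ls failed:", err)
		return
	}
	fmt.Printf("minikube-labelled networks:\n%s", out)
}
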
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-347000 -n offline-docker-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-347000 -n offline-docker-347000: exit status 7 (112.912501ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0415 12:18:42.567929   18827 status.go:249] status error: host: state: unknown state "offline-docker-347000": docker container inspect offline-docker-347000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-347000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-347000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-347000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-347000
--- FAIL: TestOffline (756.00s)

TestCertOptions (7201.321s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-097000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
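
TestCertOptions asserts that the IPs and names passed above end up as SANs in the apiserver certificate. Sketching that check, assuming a locally readable copy of the certificate at a hypothetical path (the real test reads it from inside the minikube node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path, for illustration only.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// With the flags above, these should include localhost, www.google.com,
	// 127.0.0.1 and 192.168.15.15.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}
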
E0415 12:34:42.397489    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 12:34:42.953914    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (5m15s)
	TestCertOptions (4m35s)
	TestNetworkPlugins (30m27s)

goroutine 2508 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d
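
Goroutine 2508 is the alarm that go test arms for its -timeout (2h0m0s here): a timer whose callback panics with the "test timed out" message and thereby produces this whole goroutine dump. The mechanism reduced to a sketch (not the actual testing-package code):

package main

import (
	"fmt"
	"time"
)

func main() {
	timeout := 2 * time.Hour // the suite runs with -timeout 2h0m0s

	// testing.(*M).startAlarm does essentially this: panic when the timer
	// fires, which prints every goroutine's stack.
	alarm := time.AfterFunc(timeout, func() {
		panic(fmt.Sprintf("test timed out after %v", timeout))
	})
	defer alarm.Stop()

	// ... run the tests; if they finish first, the alarm never fires.
	fmt.Println("tests finished before the deadline")
}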

goroutine 1 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0000da1a0, 0xc0012e3bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000774018, {0x8cd4f20, 0x2a, 0x2a}, {0x4985bc5?, 0x64164e8?, 0x8cf72c0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0006c7ae0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0006c7ae0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00063fb80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 642 [syscall, 4 minutes]:
syscall.syscall6(0xc00284ff80?, 0x1000000000010?, 0x10000000019?, 0x505200f8?, 0x90?, 0x95d5108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0022f78a0?, 0x48c6165?, 0x90?, 0x7932960?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x49f6f05?, 0xc0022f78d4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000a44d20)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002852580)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002852580)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002be0000, 0xc002852580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc002be0000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc002be0000, 0x79c2b78)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 46 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 45
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 643 [syscall, 5 minutes]:
syscall.syscall6(0xc002415f80?, 0x1000000000010?, 0x10000000019?, 0x50431bb8?, 0x90?, 0x95d55b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0020f1a40?, 0x48c6165?, 0x90?, 0x7932960?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x49f6f05?, 0xc0020f1a74, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002b28540)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0008a7a20)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0008a7a20)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002be0340, 0xc0008a7a20)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc002be0340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc002be0340, 0x79c2b70)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2504 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0008a7a20, 0xc002a78600)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 643
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 1136 [select, 111 minutes]:
net/http.(*persistConn).readLoop(0xc0008fe480)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1157
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2204 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00246f040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00246f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00246f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc00246f040, 0x79c2c20)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
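
This trace and the similar ones below all sit in waitParallel for roughly 31 minutes: each test has called t.Parallel() and is queued for a parallel slot, but the slots are held by the still-running TestCertOptions and TestCertExpiration subprocesses. The queueing behaviour in miniature (illustrative test names; run with go test -parallel 1 -v):

package example

import (
	"testing"
	"time"
)

// With -parallel 1, TestSecond blocks inside t.Parallel() until TestFirst
// releases the only slot, the same waitParallel stack seen above.
func TestFirst(t *testing.T) {
	t.Parallel()
	time.Sleep(2 * time.Second) // holds the parallel slot
}

func TestSecond(t *testing.T) {
	t.Parallel() // parked in testContext.waitParallel meanwhile
}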

goroutine 2120 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002be1040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002be1040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc002be1040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc002be1040, 0x79c2c60)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2211 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002be1520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002be1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002be1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002be1520, 0xc002858180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2209
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 189 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0008bcfc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 177
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 190 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008c0e40, 0xc0000664e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 177
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 193 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0008c0dd0, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x74dd060?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0008bcea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008c0e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0022e1120, {0x79ced80, 0xc002083980}, 0x1, 0xc0000664e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0022e1120, 0x3b9aca00, 0x0, 0x1, 0xc0000664e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 190
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef
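
Goroutines 193 and 877 are client-go's certificate-rotation workers: a BackoffUntil loop that pops one item off a workqueue per second until the stop channel closes. The same loop shape, sketched with the apimachinery helper the traces show (assumes k8s.io/apimachinery in go.mod):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stop := make(chan struct{})
	time.AfterFunc(3*time.Second, func() { close(stop) })

	// wait.Until re-runs the worker every second until stop closes,
	// the same shape as dynamicClientCert.runWorker above.
	wait.Until(func() {
		fmt.Println("processing next work item")
	}, time.Second, stop)
}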

goroutine 194 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x79f16f0, 0xc0000664e0}, 0xc000116f50, 0xc0020caf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x79f16f0, 0xc0000664e0}, 0x0?, 0xc000116f50, 0xc000116f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x79f16f0?, 0xc0000664e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 190
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 195 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 194
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2521 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x50080de8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002938b40?, 0xc0024b7296?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002938b40, {0xc0024b7296, 0x56a, 0x56a})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002898240, {0xc0024b7296?, 0xc0022361c0?, 0x22c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00284e7b0, {0x79cd788, 0xc00221a110})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x79cd8c8, 0xc00284e7b0}, {0x79cd788, 0xc00221a110}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000093678?, {0x79cd8c8, 0xc00284e7b0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x8c97190?, {0x79cd8c8?, 0xc00284e7b0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x79cd8c8, 0xc00284e7b0}, {0x79cd848, 0xc002898240}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002a782a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 642
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 714 [IO wait, 115 minutes]:
internal/poll.runtime_pollWait(0x50081988, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002bcb280?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc002bcb280)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc002bcb280)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0007cc060)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0007cc060)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0008bea50, {0x79e5080, 0xc0007cc060})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0008bea50)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc002be1a00?, 0xc002be1d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 711
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 878 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x79f16f0, 0xc0000664e0}, 0xc00232a750, 0xc00241af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x79f16f0, 0xc0000664e0}, 0x0?, 0xc00232a750, 0xc00232a798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x79f16f0?, 0xc0000664e0?}, 0xc002be1a00?, 0x49f9bc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00232a7d0?, 0x4a3fec4?, 0xc002be1d40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 896
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 2502 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x50081798, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002856c00?, 0xc002258a9f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002856c00, {0xc002258a9f, 0x561, 0x561})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00221a1d0, {0xc002258a9f?, 0xc002237180?, 0x235?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002414ab0, {0x79cd788, 0xc002898090})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x79cd8c8, 0xc002414ab0}, {0x79cd788, 0xc002898090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00232de78?, {0x79cd8c8, 0xc002414ab0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x8c97190?, {0x79cd8c8?, 0xc002414ab0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x79cd8c8, 0xc002414ab0}, {0x79cd848, 0xc00221a1d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002a784e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 643
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 1134 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc002aacc60, 0xc0029904e0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 809
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2119 [chan receive, 31 minutes]:
testing.(*T).Run(0xc002be0b60, {0x63be109?, 0x5ef09fda9d9?}, 0xc0024461e0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002be0b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002be0b60, 0x79c2c58)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2503 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x50080ee0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002856cc0?, 0xc0020b4200?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002856cc0, {0xc0020b4200, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00221a1e8, {0xc0020b4200?, 0xc002181c00?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002414ae0, {0x79cd788, 0xc0028980a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x79cd8c8, 0xc002414ae0}, {0x79cd788, 0xc0028980a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc002329678?, {0x79cd8c8, 0xc002414ae0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x8c97190?, {0x79cd8c8?, 0xc002414ae0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x79cd8c8, 0xc002414ae0}, {0x79cd848, 0xc00221a1e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002a781e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 643
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 879 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 878
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2202 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00246ed00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00246ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc00246ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc00246ed00, 0x79c2c80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1169 [select, 111 minutes]:
net/http.(*persistConn).writeLoop(0xc0008fe480)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1157
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 2185 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00246e000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00246e000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc00246e000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc00246e000, 0x79c2ca0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1803 [syscall, 97 minutes]:
syscall.syscall(0x0?, 0xc002ac0fc0?, 0xc0023276f0?, 0x496605d?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc002a551d0?, 0x1?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1816
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
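
Goroutine 1803 has been parked in syscall.Flock for 97 minutes: juju/mutex serializes minikube's machine operations with an exclusive file lock, and a second acquirer blocks in the syscall until the holder releases it. The blocking call in isolation (illustrative lock path):

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.OpenFile("/tmp/demo.lock", os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	// LOCK_EX without LOCK_NB blocks indefinitely; the 97-minute wait in
	// the dump is this call made while another process holds the lock.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		fmt.Println("flock:", err)
		return
	}
	fmt.Println("lock acquired")
	_ = syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
}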

goroutine 1106 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc002a5e840, 0xc002a0b080)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1105
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2209 [chan receive, 31 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc002be01a0, 0xc0024461e0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2119
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2210 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002be1380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002be1380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002be1380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002be1380, 0xc002858080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2209
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2523 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc002852580, 0xc002990420)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 642
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2214 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000da4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000da4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000da4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000da4e0, 0xc002858300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2209
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2212 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002be16c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002be16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002be16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002be16c0, 0xc002858200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2209
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2205 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00246f1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00246f1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc00246f1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc00246f1e0, 0x79c2c38)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2522 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x50080bf8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002938c00?, 0xc00003b800?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002938c00, {0xc00003b800, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002898260, {0xc00003b800?, 0x50716ad8?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00284e7e0, {0x79cd788, 0xc00221a120})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x79cd8c8, 0xc00284e7e0}, {0x79cd788, 0xc00221a120}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc002a0aa18?, {0x79cd8c8, 0xc00284e7e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x8c97190?, {0x79cd8c8?, 0xc00284e7e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x79cd8c8, 0xc00284e7e0}, {0x79cd848, 0xc002898260}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 642
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 892 [chan send, 113 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a69b80, 0xc002b88480)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 891
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2213 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002be1ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002be1ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002be1ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002be1ba0, 0xc002858280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2209
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2215 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000da680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000da680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000da680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000da680, 0xc002858380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2209
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2121 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002be11e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002be11e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc002be11e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc002be11e0, 0x79c2c70)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2217 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000dbba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000dbba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000dbba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000dbba0, 0xc002858480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2209
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2216 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000da820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000da820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000da820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0000da820, 0xc002858400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2209
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2218 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00219e000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00219e000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00219e000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00219e000, 0xc002858500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2209
	/usr/local/go/src/testing/testing.go:1742 +0x390
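The cluster of goroutines above (2213, 2215, 2216, 2217, 2218) all have the same shape: a TestNetworkPlugins subtest called t.Parallel() via the suite's MaybeParallel helper and is parked in testing.(*testContext).waitParallel until a -test.parallel slot frees up. A minimal sketch of the pattern that produces exactly this stack (test body and plugin names are hypothetical):

package integration

import "testing"

// t.Parallel() signals the parent test, then blocks inside
// testing.(*testContext).waitParallel until a parallel slot is free.
// With more parallel subtests than -test.parallel slots, the surplus
// goroutines sit in "chan receive" exactly as in the dump above.
func TestNetworkPluginsSketch(t *testing.T) {
	for _, name := range []string{"auto", "kindnet", "calico"} { // hypothetical plugin names
		name := name
		t.Run(name, func(t *testing.T) {
			t.Parallel() // parks here while other parallel tests hold the slots
			// per-plugin start/verify steps would run here
		})
	}
}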

goroutine 877 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000929dd0, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x74dd060?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002085d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000929e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006b29c0, {0x79ced80, 0xc0020830b0}, 0x1, 0xc0000664e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006b29c0, 0x3b9aca00, 0x0, 0x1, 0xc0000664e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 896
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef
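Goroutine 877 is not stuck in the same sense: client-go's workqueue (*Type).Get blocks on a sync.Cond until an item arrives or the queue shuts down, which is normal for the cert-rotation worker. A stdlib-only sketch of that blocking pattern (not client-go's actual implementation):

package main

import (
	"fmt"
	"sync"
)

// queue mimics the blocking Get of k8s.io/client-go/util/workqueue:
// consumers park in cond.Wait (sync.runtime_notifyListWait in a stack
// dump) until a producer adds an item.
type queue struct {
	mu    sync.Mutex
	cond  *sync.Cond
	items []string
}

func newQueue() *queue {
	q := &queue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *queue) Add(item string) {
	q.mu.Lock()
	q.items = append(q.items, item)
	q.mu.Unlock()
	q.cond.Signal()
}

func (q *queue) Get() string {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 {
		q.cond.Wait() // where goroutine 877 is parked
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item
}

func main() {
	q := newQueue()
	go q.Add("rotate-cert")
	fmt.Println(q.Get())
}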

goroutine 2203 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc0007d00a0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00246eea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00246eea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc00246eea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc00246eea0, 0x79c2ca8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 895 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002085e60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 894
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 896 [chan receive, 113 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000929e00, 0xc0000664e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 894
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 1062 [chan send, 113 minutes]:
os/exec.(*Cmd).watchCtx(0xc00296d8c0, 0xc002a0a360)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1061
	/usr/local/go/src/os/exec/exec.go:750 +0x973
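Goroutines 892 and 1062 have sat in os/exec.(*Cmd).watchCtx on a channel send for 113 minutes. watchCtx delivers its result on a channel that (*Cmd).Wait drains, so a send that never completes usually means a command was started with a context but never waited on. A hedged sketch of the anti-pattern:

package main

import (
	"context"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Anti-pattern sketch: returning without cmd.Wait() strands the
	// internal watchCtx goroutine on a channel send ("chan send" in a
	// goroutine dump), matching goroutines 892 and 1062 above.
	// The fix is to always pair Start with Wait: err := cmd.Wait()
	time.Sleep(time.Second)
}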

TestDockerFlags (759s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-608000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0415 12:19:42.358087    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 12:19:42.914284    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 12:24:26.119974    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 12:24:42.359618    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 12:24:42.915043    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 12:29:25.416284    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 12:29:42.361382    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 12:29:42.916370    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-608000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m37.658744263s)

-- stdout --
	* [docker-flags-608000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-608000" primary control-plane node in "docker-flags-608000" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-608000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0415 12:19:20.529978   18973 out.go:291] Setting OutFile to fd 1 ...
	I0415 12:19:20.530246   18973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:19:20.530252   18973 out.go:304] Setting ErrFile to fd 2...
	I0415 12:19:20.530256   18973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:19:20.530412   18973 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 12:19:20.531946   18973 out.go:298] Setting JSON to false
	I0415 12:19:20.554197   18973 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8331,"bootTime":1713200429,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 12:19:20.554300   18973 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 12:19:20.576259   18973 out.go:177] * [docker-flags-608000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 12:19:20.618232   18973 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 12:19:20.618295   18973 notify.go:220] Checking for updates...
	I0415 12:19:20.639977   18973 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 12:19:20.660923   18973 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 12:19:20.681951   18973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 12:19:20.702893   18973 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	I0415 12:19:20.724073   18973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 12:19:20.746035   18973 config.go:182] Loaded profile config "force-systemd-flag-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 12:19:20.746183   18973 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 12:19:20.801997   18973 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 12:19:20.802169   18973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 12:19:20.902428   18973 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:112 OomKillDisable:false NGoroutines:245 SystemTime:2024-04-15 19:19:20.892169254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 12:19:20.945349   18973 out.go:177] * Using the docker driver based on user configuration
	I0415 12:19:20.966354   18973 start.go:297] selected driver: docker
	I0415 12:19:20.966382   18973 start.go:901] validating driver "docker" against <nil>
	I0415 12:19:20.966396   18973 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 12:19:20.970805   18973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 12:19:21.069605   18973 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:112 OomKillDisable:false NGoroutines:245 SystemTime:2024-04-15 19:19:21.060114026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 12:19:21.069775   18973 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 12:19:21.069955   18973 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0415 12:19:21.091425   18973 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 12:19:21.112483   18973 cni.go:84] Creating CNI manager for ""
	I0415 12:19:21.112526   18973 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 12:19:21.112541   18973 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 12:19:21.112663   18973 start.go:340] cluster config:
	{Name:docker-flags-608000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 12:19:21.134460   18973 out.go:177] * Starting "docker-flags-608000" primary control-plane node in "docker-flags-608000" cluster
	I0415 12:19:21.176402   18973 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 12:19:21.197245   18973 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 12:19:21.239399   18973 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:19:21.239457   18973 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 12:19:21.239477   18973 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 12:19:21.239506   18973 cache.go:56] Caching tarball of preloaded images
	I0415 12:19:21.239762   18973 preload.go:173] Found /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 12:19:21.239783   18973 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 12:19:21.240705   18973 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/docker-flags-608000/config.json ...
	I0415 12:19:21.240913   18973 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/docker-flags-608000/config.json: {Name:mk76603b5a498ddd92d0d2ba1b5017d785e3c955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 12:19:21.292463   18973 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 12:19:21.292492   18973 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 12:19:21.292515   18973 cache.go:194] Successfully downloaded all kic artifacts
	I0415 12:19:21.292572   18973 start.go:360] acquireMachinesLock for docker-flags-608000: {Name:mk6e7a31829391b0c3f622c91ac5c9721e433b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 12:19:21.292730   18973 start.go:364] duration metric: took 144.075µs to acquireMachinesLock for "docker-flags-608000"
	I0415 12:19:21.292758   18973 start.go:93] Provisioning new machine with config: &{Name:docker-flags-608000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 12:19:21.292852   18973 start.go:125] createHost starting for "" (driver="docker")
	I0415 12:19:21.335316   18973 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 12:19:21.335721   18973 start.go:159] libmachine.API.Create for "docker-flags-608000" (driver="docker")
	I0415 12:19:21.335768   18973 client.go:168] LocalClient.Create starting
	I0415 12:19:21.336011   18973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 12:19:21.336108   18973 main.go:141] libmachine: Decoding PEM data...
	I0415 12:19:21.336142   18973 main.go:141] libmachine: Parsing certificate...
	I0415 12:19:21.336225   18973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 12:19:21.336312   18973 main.go:141] libmachine: Decoding PEM data...
	I0415 12:19:21.336327   18973 main.go:141] libmachine: Parsing certificate...
	I0415 12:19:21.337202   18973 cli_runner.go:164] Run: docker network inspect docker-flags-608000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 12:19:21.387545   18973 cli_runner.go:211] docker network inspect docker-flags-608000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 12:19:21.387657   18973 network_create.go:281] running [docker network inspect docker-flags-608000] to gather additional debugging logs...
	I0415 12:19:21.387673   18973 cli_runner.go:164] Run: docker network inspect docker-flags-608000
	W0415 12:19:21.436975   18973 cli_runner.go:211] docker network inspect docker-flags-608000 returned with exit code 1
	I0415 12:19:21.437002   18973 network_create.go:284] error running [docker network inspect docker-flags-608000]: docker network inspect docker-flags-608000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-608000 not found
	I0415 12:19:21.437022   18973 network_create.go:286] output of [docker network inspect docker-flags-608000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-608000 not found
	
	** /stderr **
	I0415 12:19:21.437192   18973 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:19:21.487884   18973 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:19:21.489505   18973 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:19:21.491063   18973 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:19:21.491404   18973 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023a7410}
	I0415 12:19:21.491420   18973 network_create.go:124] attempt to create docker network docker-flags-608000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 12:19:21.491494   18973 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-608000 docker-flags-608000
	W0415 12:19:21.540661   18973 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-608000 docker-flags-608000 returned with exit code 1
	W0415 12:19:21.540693   18973 network_create.go:149] failed to create docker network docker-flags-608000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-608000 docker-flags-608000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 12:19:21.540715   18973 network_create.go:116] failed to create docker network docker-flags-608000 192.168.76.0/24, will retry: subnet is taken
	I0415 12:19:21.542135   18973 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:19:21.542531   18973 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002360f20}
	I0415 12:19:21.542549   18973 network_create.go:124] attempt to create docker network docker-flags-608000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 12:19:21.542624   18973 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-608000 docker-flags-608000
	I0415 12:19:21.627891   18973 network_create.go:108] docker network docker-flags-608000 192.168.85.0/24 created
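The "Pool overlaps with other one on this address space" rejection followed by an immediate success on 192.168.85.0/24 shows the subnet probe loop: candidate private /24s are tried until docker network create accepts one. A simplified sketch of that loop, shelling out to the same command (the subnet stride and network name are taken from the log and are illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Candidate /24s stepping by 9 in the third octet, as in the log
	// above (49, 58, 67, 76, 85, ...).
	for third := 49; third <= 94; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"docker-flags-608000").CombinedOutput()
		if err == nil {
			fmt.Println("created network on", subnet)
			return
		}
		// "Pool overlaps with other one on this address space" means the
		// subnet is already claimed by another docker network; try the next.
		fmt.Printf("subnet %s rejected: %s", subnet, out)
	}
}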
	I0415 12:19:21.627931   18973 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-608000" container
	I0415 12:19:21.628034   18973 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 12:19:21.679629   18973 cli_runner.go:164] Run: docker volume create docker-flags-608000 --label name.minikube.sigs.k8s.io=docker-flags-608000 --label created_by.minikube.sigs.k8s.io=true
	I0415 12:19:21.730190   18973 oci.go:103] Successfully created a docker volume docker-flags-608000
	I0415 12:19:21.730322   18973 cli_runner.go:164] Run: docker run --rm --name docker-flags-608000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-608000 --entrypoint /usr/bin/test -v docker-flags-608000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 12:19:22.091955   18973 oci.go:107] Successfully prepared a docker volume docker-flags-608000
	I0415 12:19:22.092009   18973 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:19:22.092032   18973 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 12:19:22.092132   18973 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-608000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
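Note the gap that follows: the tar extraction into the docker volume starts at 12:19:22 and the next log line arrives six minutes later, which is what exhausts the 360-second createHost budget. The technique itself is a throwaway container that mounts the host tarball read-only alongside the named volume and unpacks it with lz4. A sketch shelling out to the same command (paths and image reference copied from the log; treat them as illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634"
	// Same shape as the log above: tar runs inside the kicbase image and
	// extracts the preload into the named volume docker-flags-608000.
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "docker-flags-608000:/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}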
	I0415 12:25:21.339578   18973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:25:21.339731   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:25:21.392538   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:25:21.392659   18973 retry.go:31] will retry after 362.785562ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:21.756381   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:25:21.810602   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:25:21.810707   18973 retry.go:31] will retry after 333.387604ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:22.146391   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:25:22.198629   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:25:22.198724   18973 retry.go:31] will retry after 675.340563ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:22.874508   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:25:22.926324   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	W0415 12:25:22.926427   18973 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	
	W0415 12:25:22.926453   18973 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
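Each retry above reruns the same docker container inspect with a Go template that resolves the host port mapped to the container's 22/tcp; it keeps failing with "No such container" because the create step never produced one. The lookup itself, extracted into a standalone sketch (container name copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort resolves the host port docker mapped to the container's
// 22/tcp, using the same inspect template as the log above.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", tmpl, container).Output()
	if err != nil {
		// "No such container" from the daemon surfaces here as exit status 1.
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("docker-flags-608000")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port)
}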
	I0415 12:25:22.926513   18973 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:25:22.926566   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:25:22.975874   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:25:22.975974   18973 retry.go:31] will retry after 251.623514ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:23.229258   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:25:23.281693   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:25:23.281786   18973 retry.go:31] will retry after 497.19811ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:23.779760   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:25:23.831222   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:25:23.831326   18973 retry.go:31] will retry after 810.493441ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:24.643247   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:25:24.696194   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	W0415 12:25:24.696288   18973 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	
	W0415 12:25:24.696310   18973 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:24.696328   18973 start.go:128] duration metric: took 6m3.40201276s to createHost
	I0415 12:25:24.696335   18973 start.go:83] releasing machines lock for "docker-flags-608000", held for 6m3.402143708s
	W0415 12:25:24.696351   18973 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 12:25:24.696790   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:24.745133   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:24.745185   18973 delete.go:82] Unable to get host status for docker-flags-608000, assuming it has already been deleted: state: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	W0415 12:25:24.745267   18973 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 12:25:24.745279   18973 start.go:728] Will try again in 5 seconds ...
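createHost gave up only when its fixed 360-second budget expired, and the driver schedules one more attempt 5 seconds later. A minimal sketch of that guard, assuming a hypothetical createHost stand-in (the 360s timeout and 5s pause mirror the log):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the real provisioning call (hypothetical);
// here it simply hangs past the deadline.
func createHost(ctx context.Context) error {
	select {
	case <-time.After(10 * time.Minute):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	for attempt := 1; attempt <= 2; attempt++ {
		ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
		err := createHost(ctx)
		cancel()
		if err == nil {
			return
		}
		if errors.Is(err, context.DeadlineExceeded) {
			fmt.Println("! StartHost failed, but will try again: create host timed out")
		}
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	}
}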
	I0415 12:25:29.747092   18973 start.go:360] acquireMachinesLock for docker-flags-608000: {Name:mk6e7a31829391b0c3f622c91ac5c9721e433b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 12:25:29.747438   18973 start.go:364] duration metric: took 194.509µs to acquireMachinesLock for "docker-flags-608000"
	I0415 12:25:29.747482   18973 start.go:96] Skipping create...Using existing machine configuration
	I0415 12:25:29.747500   18973 fix.go:54] fixHost starting: 
	I0415 12:25:29.748033   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:29.800564   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:29.800614   18973 fix.go:112] recreateIfNeeded on docker-flags-608000: state= err=unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:29.800629   18973 fix.go:117] machineExists: false. err=machine does not exist
	I0415 12:25:29.822394   18973 out.go:177] * docker "docker-flags-608000" container is missing, will recreate.
	I0415 12:25:29.865115   18973 delete.go:124] DEMOLISHING docker-flags-608000 ...
	I0415 12:25:29.865300   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:29.915484   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	W0415 12:25:29.915543   18973 stop.go:83] unable to get state: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:29.915565   18973 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:29.915956   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:29.965898   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:29.965957   18973 delete.go:82] Unable to get host status for docker-flags-608000, assuming it has already been deleted: state: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:29.966048   18973 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-608000
	W0415 12:25:30.015136   18973 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-608000 returned with exit code 1
	I0415 12:25:30.015176   18973 kic.go:371] could not find the container docker-flags-608000 to remove it. will try anyways
	I0415 12:25:30.015258   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:30.064142   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	W0415 12:25:30.064187   18973 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:30.064265   18973 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-608000 /bin/bash -c "sudo init 0"
	W0415 12:25:30.113393   18973 cli_runner.go:211] docker exec --privileged -t docker-flags-608000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 12:25:30.113424   18973 oci.go:650] error shutdown docker-flags-608000: docker exec --privileged -t docker-flags-608000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:31.115866   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:31.168263   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:31.168311   18973 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:31.168323   18973 oci.go:664] temporary error: container docker-flags-608000 status is  but expect it to be exited
	I0415 12:25:31.168347   18973 retry.go:31] will retry after 686.223319ms: couldn't verify container is exited. %v: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:31.855185   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:31.908443   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:31.908502   18973 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:31.908510   18973 oci.go:664] temporary error: container docker-flags-608000 status is  but expect it to be exited
	I0415 12:25:31.908534   18973 retry.go:31] will retry after 883.747171ms: couldn't verify container is exited. %v: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:32.792611   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:32.844087   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:32.844135   18973 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:32.844146   18973 oci.go:664] temporary error: container docker-flags-608000 status is  but expect it to be exited
	I0415 12:25:32.844173   18973 retry.go:31] will retry after 1.496984405s: couldn't verify container is exited. %v: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:34.342334   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:34.393192   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:34.393241   18973 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:34.393252   18973 oci.go:664] temporary error: container docker-flags-608000 status is  but expect it to be exited
	I0415 12:25:34.393276   18973 retry.go:31] will retry after 2.325146543s: couldn't verify container is exited. %v: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:36.718810   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:36.772599   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:36.772652   18973 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:36.772664   18973 oci.go:664] temporary error: container docker-flags-608000 status is  but expect it to be exited
	I0415 12:25:36.772690   18973 retry.go:31] will retry after 1.511178443s: couldn't verify container is exited. %v: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:38.284562   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:38.336025   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:38.336074   18973 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:38.336082   18973 oci.go:664] temporary error: container docker-flags-608000 status is  but expect it to be exited
	I0415 12:25:38.336103   18973 retry.go:31] will retry after 2.025524579s: couldn't verify container is exited. %v: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:40.363411   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:40.415686   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:40.415734   18973 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:40.415744   18973 oci.go:664] temporary error: container docker-flags-608000 status is  but expect it to be exited
	I0415 12:25:40.415769   18973 retry.go:31] will retry after 4.0635282s: couldn't verify container is exited. %v: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:44.480248   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:44.531965   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:44.532014   18973 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:44.532022   18973 oci.go:664] temporary error: container docker-flags-608000 status is  but expect it to be exited
	I0415 12:25:44.532044   18973 retry.go:31] will retry after 5.928333995s: couldn't verify container is exited. %v: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:50.461494   18973 cli_runner.go:164] Run: docker container inspect docker-flags-608000 --format={{.State.Status}}
	W0415 12:25:50.513587   18973 cli_runner.go:211] docker container inspect docker-flags-608000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:50.513632   18973 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:25:50.513643   18973 oci.go:664] temporary error: container docker-flags-608000 status is  but expect it to be exited
	I0415 12:25:50.513674   18973 oci.go:88] couldn't shut down docker-flags-608000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	 
	I0415 12:25:50.513749   18973 cli_runner.go:164] Run: docker rm -f -v docker-flags-608000
	I0415 12:25:50.563024   18973 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-608000
	W0415 12:25:50.612016   18973 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-608000 returned with exit code 1
	I0415 12:25:50.612117   18973 cli_runner.go:164] Run: docker network inspect docker-flags-608000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:25:50.662412   18973 cli_runner.go:164] Run: docker network rm docker-flags-608000
	I0415 12:25:50.771242   18973 fix.go:124] Sleeping 1 second for extra luck!
	I0415 12:25:51.773406   18973 start.go:125] createHost starting for "" (driver="docker")
	I0415 12:25:51.795352   18973 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 12:25:51.795522   18973 start.go:159] libmachine.API.Create for "docker-flags-608000" (driver="docker")
	I0415 12:25:51.795552   18973 client.go:168] LocalClient.Create starting
	I0415 12:25:51.795768   18973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 12:25:51.795863   18973 main.go:141] libmachine: Decoding PEM data...
	I0415 12:25:51.795893   18973 main.go:141] libmachine: Parsing certificate...
	I0415 12:25:51.795974   18973 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 12:25:51.796053   18973 main.go:141] libmachine: Decoding PEM data...
	I0415 12:25:51.796067   18973 main.go:141] libmachine: Parsing certificate...
	I0415 12:25:51.796892   18973 cli_runner.go:164] Run: docker network inspect docker-flags-608000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 12:25:51.847237   18973 cli_runner.go:211] docker network inspect docker-flags-608000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 12:25:51.847325   18973 network_create.go:281] running [docker network inspect docker-flags-608000] to gather additional debugging logs...
	I0415 12:25:51.847345   18973 cli_runner.go:164] Run: docker network inspect docker-flags-608000
	W0415 12:25:51.896777   18973 cli_runner.go:211] docker network inspect docker-flags-608000 returned with exit code 1
	I0415 12:25:51.896807   18973 network_create.go:284] error running [docker network inspect docker-flags-608000]: docker network inspect docker-flags-608000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-608000 not found
	I0415 12:25:51.896821   18973 network_create.go:286] output of [docker network inspect docker-flags-608000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-608000 not found
	
	** /stderr **
	I0415 12:25:51.896967   18973 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:25:51.948386   18973 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:25:51.949862   18973 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:25:51.951185   18973 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:25:51.952735   18973 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:25:51.954136   18973 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:25:51.955700   18973 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:25:51.956032   18973 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021d9cc0}
	I0415 12:25:51.956044   18973 network_create.go:124] attempt to create docker network docker-flags-608000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0415 12:25:51.956106   18973 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-608000 docker-flags-608000
	I0415 12:25:52.041308   18973 network_create.go:108] docker network docker-flags-608000 192.168.103.0/24 created
	I0415 12:25:52.041360   18973 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-608000" container
	I0415 12:25:52.041482   18973 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 12:25:52.093228   18973 cli_runner.go:164] Run: docker volume create docker-flags-608000 --label name.minikube.sigs.k8s.io=docker-flags-608000 --label created_by.minikube.sigs.k8s.io=true
	I0415 12:25:52.141891   18973 oci.go:103] Successfully created a docker volume docker-flags-608000
	I0415 12:25:52.142015   18973 cli_runner.go:164] Run: docker run --rm --name docker-flags-608000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-608000 --entrypoint /usr/bin/test -v docker-flags-608000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 12:25:52.461270   18973 oci.go:107] Successfully prepared a docker volume docker-flags-608000
	I0415 12:25:52.461335   18973 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:25:52.461355   18973 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 12:25:52.461466   18973 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-608000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 12:31:51.834861   18973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:31:51.834986   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:51.885554   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:51.885668   18973 retry.go:31] will retry after 219.746589ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:52.106737   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:52.157941   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:52.158054   18973 retry.go:31] will retry after 194.951202ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:52.354198   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:52.403674   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:52.403770   18973 retry.go:31] will retry after 509.512055ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:52.914865   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:52.967557   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	W0415 12:31:52.967662   18973 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	
	W0415 12:31:52.967693   18973 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:52.967756   18973 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:31:52.967811   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:53.016649   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:53.016748   18973 retry.go:31] will retry after 281.842194ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:53.300969   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:53.353267   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:53.353361   18973 retry.go:31] will retry after 467.428624ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:53.821540   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:53.874720   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:53.874822   18973 retry.go:31] will retry after 735.993028ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:54.613223   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:54.665070   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	W0415 12:31:54.665175   18973 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	
	W0415 12:31:54.665195   18973 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:54.665206   18973 start.go:128] duration metric: took 6m2.854285453s to createHost
	I0415 12:31:54.665272   18973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:31:54.665330   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:54.714023   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:54.714121   18973 retry.go:31] will retry after 336.632391ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:55.053074   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:55.106239   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:55.106344   18973 retry.go:31] will retry after 494.302975ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:55.601724   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:55.654139   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:55.654237   18973 retry.go:31] will retry after 392.739208ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:56.049409   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:56.101686   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	W0415 12:31:56.101786   18973 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	
	W0415 12:31:56.101805   18973 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:56.101863   18973 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:31:56.101925   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:56.152049   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:56.152140   18973 retry.go:31] will retry after 201.204319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:56.355721   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:56.407694   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:56.407792   18973 retry.go:31] will retry after 337.375696ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:56.747593   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:56.800544   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:56.800639   18973 retry.go:31] will retry after 445.01327ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:57.246764   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:57.300035   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	I0415 12:31:57.300126   18973 retry.go:31] will retry after 640.11774ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:57.940944   18973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000
	W0415 12:31:57.993963   18973 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000 returned with exit code 1
	W0415 12:31:57.994065   18973 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	
	W0415 12:31:57.994084   18973 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-608000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-608000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	I0415 12:31:57.994089   18973 fix.go:56] duration metric: took 6m28.208929411s for fixHost
	I0415 12:31:57.994096   18973 start.go:83] releasing machines lock for "docker-flags-608000", held for 6m28.208975821s
	W0415 12:31:57.994170   18973 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-608000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-608000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 12:31:58.037832   18973 out.go:177] 
	W0415 12:31:58.058638   18973 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 12:31:58.058681   18973 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 12:31:58.058711   18973 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 12:31:58.101942   18973 out.go:177] 

** /stderr **
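
The stderr trace above is one failure repeated: minikube's oci helper polls docker container inspect --format={{.State.Status}} with growing delays (the retry.go:31 lines), the daemon keeps answering "No such container: docker-flags-608000", and createHost finally gives up at the 360-second budget. The poll can be reproduced by hand with a minimal shell loop; the retry count and the linear backoff below are illustrative stand-ins for minikube's randomized delays, not its actual values:

	NAME=docker-flags-608000   # the profile's container name, taken from this test
	for i in 1 2 3 4 5; do
	  if status=$(docker container inspect "$NAME" --format '{{.State.Status}}' 2>/dev/null); then
	    echo "state: $status"; break
	  fi
	  echo "attempt $i: no such container, retrying in ${i}s..."
	  sleep "$i"   # crude linear backoff; minikube retries with randomized increasing delays
	done
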
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-608000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-608000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-608000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (204.532196ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-608000 host status: state: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-608000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
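
The docker_test.go:63 assertions check that values passed via --docker-env surface in the Docker daemon's systemd unit inside the node, which is why the test reads the unit's Environment property; with no node, the captured output is just "\n\n". On a cluster that did start, the same check can be run by hand (the Environment line below sketches a passing result and is not output observed in this run):

	$ out/minikube-darwin-amd64 -p docker-flags-608000 ssh \
	    "sudo systemctl show docker --property=Environment --no-pager"
	# a pass would include the injected variables, e.g.:
	# Environment=... FOO=BAR BAZ=BAT ...
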
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-608000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-608000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (199.324432ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-608000 host status: state: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000
	

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-608000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-608000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
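
Similarly, docker_test.go:73 expects every --docker-opt to be appended to the daemon's command line, so it looks for --debug inside the unit's ExecStart value. A hand-run equivalent on a healthy node (the ExecStart line sketches a passing result, not output observed here):

	$ out/minikube-darwin-amd64 -p docker-flags-608000 ssh \
	    "sudo systemctl show docker --property=ExecStart --no-pager"
	# a pass would show dockerd invoked with the requested options, e.g.:
	# ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }
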
panic.go:626: *** TestDockerFlags FAILED at 2024-04-15 12:31:58.583027 -0700 PDT m=+6924.113412720
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-608000
helpers_test.go:235: (dbg) docker inspect docker-flags-608000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-608000",
	        "Id": "b3ed06096e77bb20e8db63cc8755ba6b90cf395b6628e677c9e6418c2c458e2d",
	        "Created": "2024-04-15T19:25:52.00254567Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-608000"
	        }
	    }
	]

-- /stdout --
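
Note what the post-mortem actually matched: docker inspect resolved "docker-flags-608000" to the bridge network created at 12:25:52 (Scope, IPAM, and an empty Containers map), not to a container: the container itself never came into existence. The profile cleanup below removes it; a leftover network like this can also be located and deleted directly, using the label shown in the Labels block above:

	$ docker network ls --filter label=name.minikube.sigs.k8s.io=docker-flags-608000
	$ docker network rm docker-flags-608000
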
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-608000 -n docker-flags-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-608000 -n docker-flags-608000: exit status 7 (112.690475ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0415 12:31:58.746833   19466 status.go:249] status error: host: state: unknown state "docker-flags-608000": docker container inspect docker-flags-608000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-608000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-608000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-608000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-608000
--- FAIL: TestDockerFlags (759.00s)

TestForceSystemdFlag (755.99s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-818000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-818000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.877083901s)

-- stdout --
	* [force-systemd-flag-818000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-818000" primary control-plane node in "force-systemd-flag-818000" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-818000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0415 12:18:43.365925   18851 out.go:291] Setting OutFile to fd 1 ...
	I0415 12:18:43.366183   18851 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:18:43.366189   18851 out.go:304] Setting ErrFile to fd 2...
	I0415 12:18:43.366192   18851 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:18:43.366352   18851 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 12:18:43.367832   18851 out.go:298] Setting JSON to false
	I0415 12:18:43.390204   18851 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8294,"bootTime":1713200429,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 12:18:43.390301   18851 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 12:18:43.412466   18851 out.go:177] * [force-systemd-flag-818000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 12:18:43.456294   18851 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 12:18:43.456356   18851 notify.go:220] Checking for updates...
	I0415 12:18:43.498912   18851 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 12:18:43.520107   18851 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 12:18:43.540952   18851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 12:18:43.561953   18851 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	I0415 12:18:43.583226   18851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 12:18:43.604781   18851 config.go:182] Loaded profile config "force-systemd-env-370000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 12:18:43.604973   18851 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 12:18:43.661373   18851 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 12:18:43.661554   18851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 12:18:43.761351   18851 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:108 OomKillDisable:false NGoroutines:235 SystemTime:2024-04-15 19:18:43.751211168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 12:18:43.783245   18851 out.go:177] * Using the docker driver based on user configuration
	I0415 12:18:43.824678   18851 start.go:297] selected driver: docker
	I0415 12:18:43.824709   18851 start.go:901] validating driver "docker" against <nil>
	I0415 12:18:43.824723   18851 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 12:18:43.828902   18851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 12:18:43.927714   18851 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:108 OomKillDisable:false NGoroutines:235 SystemTime:2024-04-15 19:18:43.917926073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 12:18:43.927902   18851 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 12:18:43.928077   18851 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 12:18:43.949903   18851 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 12:18:43.971822   18851 cni.go:84] Creating CNI manager for ""
	I0415 12:18:43.971867   18851 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 12:18:43.971884   18851 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 12:18:43.971993   18851 start.go:340] cluster config:
	{Name:force-systemd-flag-818000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 12:18:43.993426   18851 out.go:177] * Starting "force-systemd-flag-818000" primary control-plane node in "force-systemd-flag-818000" cluster
	I0415 12:18:44.035574   18851 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 12:18:44.056553   18851 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 12:18:44.098706   18851 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:18:44.098786   18851 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 12:18:44.098801   18851 cache.go:56] Caching tarball of preloaded images
	I0415 12:18:44.098811   18851 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 12:18:44.099055   18851 preload.go:173] Found /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 12:18:44.099075   18851 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 12:18:44.099191   18851 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/force-systemd-flag-818000/config.json ...
	I0415 12:18:44.099232   18851 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/force-systemd-flag-818000/config.json: {Name:mk05ac53460fb17390d9fc192cef9f575f768df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 12:18:44.150146   18851 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 12:18:44.150315   18851 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 12:18:44.150335   18851 cache.go:194] Successfully downloaded all kic artifacts
	I0415 12:18:44.150372   18851 start.go:360] acquireMachinesLock for force-systemd-flag-818000: {Name:mkca0640ea405db9dba55453c7350b53f973e79c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 12:18:44.150523   18851 start.go:364] duration metric: took 140.168µs to acquireMachinesLock for "force-systemd-flag-818000"
	I0415 12:18:44.150552   18851 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-818000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-818000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 12:18:44.150631   18851 start.go:125] createHost starting for "" (driver="docker")
	I0415 12:18:44.193711   18851 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 12:18:44.194116   18851 start.go:159] libmachine.API.Create for "force-systemd-flag-818000" (driver="docker")
	I0415 12:18:44.194158   18851 client.go:168] LocalClient.Create starting
	I0415 12:18:44.194401   18851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 12:18:44.194515   18851 main.go:141] libmachine: Decoding PEM data...
	I0415 12:18:44.194549   18851 main.go:141] libmachine: Parsing certificate...
	I0415 12:18:44.194668   18851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 12:18:44.194748   18851 main.go:141] libmachine: Decoding PEM data...
	I0415 12:18:44.194762   18851 main.go:141] libmachine: Parsing certificate...
	I0415 12:18:44.195610   18851 cli_runner.go:164] Run: docker network inspect force-systemd-flag-818000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 12:18:44.245707   18851 cli_runner.go:211] docker network inspect force-systemd-flag-818000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 12:18:44.245811   18851 network_create.go:281] running [docker network inspect force-systemd-flag-818000] to gather additional debugging logs...
	I0415 12:18:44.245830   18851 cli_runner.go:164] Run: docker network inspect force-systemd-flag-818000
	W0415 12:18:44.294909   18851 cli_runner.go:211] docker network inspect force-systemd-flag-818000 returned with exit code 1
	I0415 12:18:44.294941   18851 network_create.go:284] error running [docker network inspect force-systemd-flag-818000]: docker network inspect force-systemd-flag-818000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-818000 not found
	I0415 12:18:44.294956   18851 network_create.go:286] output of [docker network inspect force-systemd-flag-818000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-818000 not found
	
	** /stderr **
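Triage note: the "network force-systemd-flag-818000 not found" output above is expected at this point in the flow; the network is only created a few lines further down. When triaging locally, minikube-created networks can be listed by the same labels the create command below applies; this is a convenience check for reproduction, not part of the test:

	docker network ls --filter label=name.minikube.sigs.k8s.io=force-systemd-flag-818000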
	I0415 12:18:44.295068   18851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:18:44.346112   18851 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:18:44.347559   18851 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:18:44.347927   18851 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022fc490}
	I0415 12:18:44.347945   18851 network_create.go:124] attempt to create docker network force-systemd-flag-818000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 12:18:44.348034   18851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-818000 force-systemd-flag-818000
	I0415 12:18:44.432798   18851 network_create.go:108] docker network force-systemd-flag-818000 192.168.67.0/24 created
	I0415 12:18:44.432837   18851 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-818000" container
	I0415 12:18:44.432953   18851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 12:18:44.483618   18851 cli_runner.go:164] Run: docker volume create force-systemd-flag-818000 --label name.minikube.sigs.k8s.io=force-systemd-flag-818000 --label created_by.minikube.sigs.k8s.io=true
	I0415 12:18:44.534992   18851 oci.go:103] Successfully created a docker volume force-systemd-flag-818000
	I0415 12:18:44.535115   18851 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-818000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-818000 --entrypoint /usr/bin/test -v force-systemd-flag-818000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 12:18:44.914998   18851 oci.go:107] Successfully prepared a docker volume force-systemd-flag-818000
	I0415 12:18:44.915045   18851 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:18:44.915060   18851 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 12:18:44.915178   18851 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-818000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
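Triage note: the next log line is stamped 12:24:44, six minutes after this extraction command, which matches the 360-second createHost timeout reported below; no container-create step ever appears between the extraction and the timeout, so the attempt appears to stall in or around the preload extraction. Assuming the same cache and base image are available locally, the extraction can be timed in isolation with the command copied verbatim from the log:

	time docker run --rm --entrypoint /usr/bin/tar \
	  -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro \
	  -v force-systemd-flag-818000:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b \
	  -I lz4 -xf /preloaded.tar -C /extractDir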
	I0415 12:24:44.196940   18851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:24:44.197144   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:24:44.248714   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:24:44.248829   18851 retry.go:31] will retry after 134.607543ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:44.385375   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:24:44.436122   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:24:44.436212   18851 retry.go:31] will retry after 406.015524ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:44.844611   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:24:44.896820   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:24:44.896925   18851 retry.go:31] will retry after 291.544898ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:45.189799   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:24:45.240842   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:24:45.240946   18851 retry.go:31] will retry after 459.402057ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:45.701097   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:24:45.754472   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	W0415 12:24:45.754593   18851 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	
	W0415 12:24:45.754614   18851 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:45.754679   18851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:24:45.754748   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:24:45.805226   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:24:45.805317   18851 retry.go:31] will retry after 203.274217ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:46.009478   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:24:46.063154   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:24:46.063250   18851 retry.go:31] will retry after 283.683612ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:46.348689   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:24:46.412093   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:24:46.412205   18851 retry.go:31] will retry after 413.661311ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:46.827453   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:24:46.880203   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:24:46.880293   18851 retry.go:31] will retry after 781.872703ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:47.663026   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:24:47.715558   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	W0415 12:24:47.715662   18851 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	
	W0415 12:24:47.715676   18851 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:47.715692   18851 start.go:128] duration metric: took 6m3.563596163s to createHost
	I0415 12:24:47.715700   18851 start.go:83] releasing machines lock for "force-systemd-flag-818000", held for 6m3.563715042s
	W0415 12:24:47.715717   18851 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 12:24:47.716141   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:24:47.765048   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	I0415 12:24:47.765108   18851 delete.go:82] Unable to get host status for force-systemd-flag-818000, assuming it has already been deleted: state: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	W0415 12:24:47.765198   18851 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 12:24:47.765206   18851 start.go:728] Will try again in 5 seconds ...
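Triage note: the first attempt gave up after 6m3s without the node container ever being created, which is why every port-22 lookup above fails with "No such container". When reproducing, the machine state can be confirmed directly with the same inspect the retry loop uses:

	docker container inspect -f '{{.State.Status}}' force-systemd-flag-818000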
	I0415 12:24:52.765961   18851 start.go:360] acquireMachinesLock for force-systemd-flag-818000: {Name:mkca0640ea405db9dba55453c7350b53f973e79c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 12:24:52.766172   18851 start.go:364] duration metric: took 162.427µs to acquireMachinesLock for "force-systemd-flag-818000"
	I0415 12:24:52.766212   18851 start.go:96] Skipping create...Using existing machine configuration
	I0415 12:24:52.766229   18851 fix.go:54] fixHost starting: 
	I0415 12:24:52.766644   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:24:52.818643   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	I0415 12:24:52.818692   18851 fix.go:112] recreateIfNeeded on force-systemd-flag-818000: state= err=unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:52.818710   18851 fix.go:117] machineExists: false. err=machine does not exist
	I0415 12:24:52.862162   18851 out.go:177] * docker "force-systemd-flag-818000" container is missing, will recreate.
	I0415 12:24:52.883264   18851 delete.go:124] DEMOLISHING force-systemd-flag-818000 ...
	I0415 12:24:52.883478   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:24:52.933989   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	W0415 12:24:52.934042   18851 stop.go:83] unable to get state: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:52.934057   18851 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:52.934417   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:24:52.983515   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	I0415 12:24:52.983568   18851 delete.go:82] Unable to get host status for force-systemd-flag-818000, assuming it has already been deleted: state: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:52.983656   18851 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-818000
	W0415 12:24:53.032901   18851 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-818000 returned with exit code 1
	I0415 12:24:53.032943   18851 kic.go:371] could not find the container force-systemd-flag-818000 to remove it. will try anyways
	I0415 12:24:53.033014   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:24:53.082845   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	W0415 12:24:53.082895   18851 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:53.082979   18851 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-818000 /bin/bash -c "sudo init 0"
	W0415 12:24:53.132287   18851 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-818000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 12:24:53.132319   18851 oci.go:650] error shutdown force-systemd-flag-818000: docker exec --privileged -t force-systemd-flag-818000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:54.134166   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:24:54.187521   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	I0415 12:24:54.187580   18851 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:54.187590   18851 oci.go:664] temporary error: container force-systemd-flag-818000 status is  but expect it to be exited
	I0415 12:24:54.187611   18851 retry.go:31] will retry after 613.44446ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:54.803274   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:24:54.855725   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	I0415 12:24:54.855777   18851 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:54.855791   18851 oci.go:664] temporary error: container force-systemd-flag-818000 status is  but expect it to be exited
	I0415 12:24:54.855818   18851 retry.go:31] will retry after 688.497421ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:55.546188   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:24:55.599902   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	I0415 12:24:55.599947   18851 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:55.599958   18851 oci.go:664] temporary error: container force-systemd-flag-818000 status is  but expect it to be exited
	I0415 12:24:55.599985   18851 retry.go:31] will retry after 590.193743ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:56.192258   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:24:56.243739   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	I0415 12:24:56.243784   18851 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:56.243794   18851 oci.go:664] temporary error: container force-systemd-flag-818000 status is  but expect it to be exited
	I0415 12:24:56.243820   18851 retry.go:31] will retry after 2.287653475s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:58.532551   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:24:58.585557   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	I0415 12:24:58.585607   18851 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:24:58.585619   18851 oci.go:664] temporary error: container force-systemd-flag-818000 status is  but expect it to be exited
	I0415 12:24:58.585644   18851 retry.go:31] will retry after 1.543561667s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:25:00.129689   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:25:00.182058   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:00.182107   18851 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:25:00.182119   18851 oci.go:664] temporary error: container force-systemd-flag-818000 status is  but expect it to be exited
	I0415 12:25:00.182143   18851 retry.go:31] will retry after 4.890010473s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:25:05.072703   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:25:05.124408   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:05.124459   18851 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:25:05.124473   18851 oci.go:664] temporary error: container force-systemd-flag-818000 status is  but expect it to be exited
	I0415 12:25:05.124498   18851 retry.go:31] will retry after 5.135259289s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:25:10.260029   18851 cli_runner.go:164] Run: docker container inspect force-systemd-flag-818000 --format={{.State.Status}}
	W0415 12:25:10.310463   18851 cli_runner.go:211] docker container inspect force-systemd-flag-818000 --format={{.State.Status}} returned with exit code 1
	I0415 12:25:10.310515   18851 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:25:10.310524   18851 oci.go:664] temporary error: container force-systemd-flag-818000 status is  but expect it to be exited
	I0415 12:25:10.310554   18851 oci.go:88] couldn't shut down force-systemd-flag-818000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	 
	I0415 12:25:10.310652   18851 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-818000
	I0415 12:25:10.359672   18851 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-818000
	W0415 12:25:10.409443   18851 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-818000 returned with exit code 1
	I0415 12:25:10.409543   18851 cli_runner.go:164] Run: docker network inspect force-systemd-flag-818000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:25:10.459513   18851 cli_runner.go:164] Run: docker network rm force-systemd-flag-818000
	I0415 12:25:10.573359   18851 fix.go:124] Sleeping 1 second for extra luck!
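Triage note: the recreate path above force-removes the (nonexistent) container and deletes the first network before retrying. The equivalent manual cleanup, using the same commands the log runs, would be:

	docker rm -f -v force-systemd-flag-818000
	docker network rm force-systemd-flag-818000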
	I0415 12:25:11.574807   18851 start.go:125] createHost starting for "" (driver="docker")
	I0415 12:25:11.596905   18851 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 12:25:11.597064   18851 start.go:159] libmachine.API.Create for "force-systemd-flag-818000" (driver="docker")
	I0415 12:25:11.597095   18851 client.go:168] LocalClient.Create starting
	I0415 12:25:11.597316   18851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 12:25:11.597408   18851 main.go:141] libmachine: Decoding PEM data...
	I0415 12:25:11.597432   18851 main.go:141] libmachine: Parsing certificate...
	I0415 12:25:11.597512   18851 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 12:25:11.597583   18851 main.go:141] libmachine: Decoding PEM data...
	I0415 12:25:11.597597   18851 main.go:141] libmachine: Parsing certificate...
	I0415 12:25:11.598358   18851 cli_runner.go:164] Run: docker network inspect force-systemd-flag-818000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 12:25:11.649084   18851 cli_runner.go:211] docker network inspect force-systemd-flag-818000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 12:25:11.649186   18851 network_create.go:281] running [docker network inspect force-systemd-flag-818000] to gather additional debugging logs...
	I0415 12:25:11.649213   18851 cli_runner.go:164] Run: docker network inspect force-systemd-flag-818000
	W0415 12:25:11.698512   18851 cli_runner.go:211] docker network inspect force-systemd-flag-818000 returned with exit code 1
	I0415 12:25:11.698541   18851 network_create.go:284] error running [docker network inspect force-systemd-flag-818000]: docker network inspect force-systemd-flag-818000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-818000 not found
	I0415 12:25:11.698554   18851 network_create.go:286] output of [docker network inspect force-systemd-flag-818000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-818000 not found
	
	** /stderr **
	I0415 12:25:11.698680   18851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:25:11.749934   18851 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:25:11.751257   18851 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:25:11.752825   18851 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:25:11.754488   18851 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:25:11.756082   18851 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:25:11.756431   18851 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002413980}
	I0415 12:25:11.756443   18851 network_create.go:124] attempt to create docker network force-systemd-flag-818000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0415 12:25:11.756516   18851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-818000 force-systemd-flag-818000
	I0415 12:25:11.840578   18851 network_create.go:108] docker network force-systemd-flag-818000 192.168.94.0/24 created
	I0415 12:25:11.840622   18851 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-818000" container
	I0415 12:25:11.840726   18851 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 12:25:11.892731   18851 cli_runner.go:164] Run: docker volume create force-systemd-flag-818000 --label name.minikube.sigs.k8s.io=force-systemd-flag-818000 --label created_by.minikube.sigs.k8s.io=true
	I0415 12:25:11.941972   18851 oci.go:103] Successfully created a docker volume force-systemd-flag-818000
	I0415 12:25:11.942087   18851 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-818000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-818000 --entrypoint /usr/bin/test -v force-systemd-flag-818000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 12:25:12.237238   18851 oci.go:107] Successfully prepared a docker volume force-systemd-flag-818000
	I0415 12:25:12.237294   18851 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:25:12.237311   18851 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 12:25:12.237418   18851 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-818000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
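Triage note: the second attempt stalls at exactly the same step; this extraction is stamped 12:25:12 and the next log line 12:31:11, again consuming the full 360-second budget. The log gives no direct cause. One plausible local check (an assumption on our part, not something the log shows) is whether the Docker Desktop VM is short on disk, since the preload tarball expands into the named volume:

	docker system df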
	I0415 12:31:11.624631   18851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:31:11.624766   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:11.677351   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:11.677468   18851 retry.go:31] will retry after 339.322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:12.017556   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:12.068881   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:12.069010   18851 retry.go:31] will retry after 386.008269ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:12.455701   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:12.508595   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:12.508704   18851 retry.go:31] will retry after 735.998753ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:13.247629   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:13.298951   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	W0415 12:31:13.299064   18851 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	
	W0415 12:31:13.299091   18851 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:13.299147   18851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:31:13.299210   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:13.350518   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:13.350615   18851 retry.go:31] will retry after 345.179363ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:13.696464   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:13.748216   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:13.748315   18851 retry.go:31] will retry after 414.423283ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:14.163390   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:14.217191   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:14.217286   18851 retry.go:31] will retry after 366.516567ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:14.584649   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:14.637364   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	W0415 12:31:14.637473   18851 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	
	W0415 12:31:14.637489   18851 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:14.637504   18851 start.go:128] duration metric: took 6m3.033914442s to createHost
	I0415 12:31:14.637575   18851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:31:14.637634   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:14.687549   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:14.687640   18851 retry.go:31] will retry after 374.78626ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:15.063414   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:15.117089   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:15.117189   18851 retry.go:31] will retry after 312.586553ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:15.431356   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:15.484973   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:15.485072   18851 retry.go:31] will retry after 836.92086ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:16.323007   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:16.437019   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	W0415 12:31:16.437145   18851 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	
	W0415 12:31:16.437170   18851 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:16.437257   18851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:31:16.437342   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:16.486536   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:16.486630   18851 retry.go:31] will retry after 133.17368ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:16.621794   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:16.671881   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:16.671983   18851 retry.go:31] will retry after 450.165199ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:17.123446   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:17.175315   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	I0415 12:31:17.175408   18851 retry.go:31] will retry after 834.015137ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:18.012249   18851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000
	W0415 12:31:18.064496   18851 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000 returned with exit code 1
	W0415 12:31:18.064607   18851 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	
	W0415 12:31:18.064630   18851 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-818000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-818000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	I0415 12:31:18.064642   18851 fix.go:56] duration metric: took 6m25.26769596s for fixHost
	I0415 12:31:18.064650   18851 start.go:83] releasing machines lock for "force-systemd-flag-818000", held for 6m25.2677448s
	W0415 12:31:18.064732   18851 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-818000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-818000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 12:31:18.107418   18851 out.go:177] 
	W0415 12:31:18.128478   18851 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 12:31:18.128596   18851 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 12:31:18.128632   18851 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 12:31:18.171506   18851 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-818000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
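The loop above is minikube repeatedly asking Docker which host port is published for the node's 22/tcp; because the container was never created, every inspect exits 1 with "No such container". A minimal Go sketch of that lookup (an illustrative stand-in, not minikube's actual cli_runner code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort mirrors the inspect command repeated in the log: it asks the
	// Docker daemon for the host port mapped to the container's 22/tcp.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).CombinedOutput()
		if err != nil {
			// A missing container surfaces here as exit status 1 plus
			// "Error response from daemon: No such container: ..." on stderr.
			return "", fmt.Errorf("get port 22 for %q: %v: %s", container, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("force-systemd-flag-818000")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", port)
	}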
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-818000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-818000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (202.28076ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-818000 host status: state: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-818000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
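For reference, the check that docker_test.go:110 could not complete is reading the node's cgroup driver, which the test presumably expects to be "systemd" when --force-systemd is set. A hedged local stand-in (the real test issues the same format string inside the node via minikube ssh):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Queries the local daemon; the test targets the (never-created) node.
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "systemd" or "cgroupfs"
	}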
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-15 12:31:18.429848 -0700 PDT m=+6883.967167514
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-818000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-818000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-818000",
	        "Id": "10b5e075c03463b1eee7d8007e65a2c66f4910f5de085189ecd6db9b78e58012",
	        "Created": "2024-04-15T19:25:11.801926792Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-818000"
	        }
	    }
	]

-- /stdout --
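Note what the post-mortem inspect actually matched: the leftover minikube *network* named force-systemd-flag-818000 (hence the Subnet/Gateway/IPAM fields and the empty Containers map), not a container; bare `docker inspect` resolves a name across all object types. A small sketch, assuming only the minikube labels shown above, for listing such leftovers:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Lists networks carrying the label minikube attached at creation time.
		out, err := exec.Command("docker", "network", "ls",
			"--filter", "label=created_by.minikube.sigs.k8s.io=true",
			"--format", "{{.Name}}").Output()
		if err != nil {
			fmt.Println("network ls failed:", err)
			return
		}
		fmt.Print(string(out))
	}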
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-818000 -n force-systemd-flag-818000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-818000 -n force-systemd-flag-818000: exit status 7 (112.045293ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0415 12:31:18.593563   19344 status.go:249] status error: host: state: unknown state "force-systemd-flag-818000": docker container inspect force-systemd-flag-818000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-818000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-818000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-818000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-818000
--- FAIL: TestForceSystemdFlag (755.99s)

TestForceSystemdEnv (755.72s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-370000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0415 12:07:46.055919    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 12:09:42.356282    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 12:09:42.911568    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 12:12:45.410010    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 12:14:42.356711    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 12:14:42.912609    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-370000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.608926332s)

-- stdout --
	* [force-systemd-env-370000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-370000" primary control-plane node in "force-systemd-env-370000" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-370000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0415 12:06:44.746847   18462 out.go:291] Setting OutFile to fd 1 ...
	I0415 12:06:44.747097   18462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:06:44.747103   18462 out.go:304] Setting ErrFile to fd 2...
	I0415 12:06:44.747106   18462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 12:06:44.747283   18462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 12:06:44.748756   18462 out.go:298] Setting JSON to false
	I0415 12:06:44.772850   18462 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7575,"bootTime":1713200429,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 12:06:44.772970   18462 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 12:06:44.795373   18462 out.go:177] * [force-systemd-env-370000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 12:06:44.857779   18462 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 12:06:44.836691   18462 notify.go:220] Checking for updates...
	I0415 12:06:44.899675   18462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 12:06:44.920733   18462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 12:06:44.941660   18462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 12:06:44.962702   18462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	I0415 12:06:44.983680   18462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0415 12:06:45.005751   18462 config.go:182] Loaded profile config "offline-docker-347000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 12:06:45.005909   18462 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 12:06:45.061874   18462 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 12:06:45.062041   18462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 12:06:45.161692   18462 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:100 OomKillDisable:false NGoroutines:205 SystemTime:2024-04-15 19:06:45.151518721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 12:06:45.203452   18462 out.go:177] * Using the docker driver based on user configuration
	I0415 12:06:45.224510   18462 start.go:297] selected driver: docker
	I0415 12:06:45.224541   18462 start.go:901] validating driver "docker" against <nil>
	I0415 12:06:45.224556   18462 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 12:06:45.228687   18462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 12:06:45.324913   18462 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:100 OomKillDisable:false NGoroutines:205 SystemTime:2024-04-15 19:06:45.315342249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 12:06:45.325126   18462 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 12:06:45.325310   18462 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 12:06:45.346419   18462 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 12:06:45.367378   18462 cni.go:84] Creating CNI manager for ""
	I0415 12:06:45.367423   18462 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 12:06:45.367441   18462 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 12:06:45.367560   18462 start.go:340] cluster config:
	{Name:force-systemd-env-370000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-370000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 12:06:45.389156   18462 out.go:177] * Starting "force-systemd-env-370000" primary control-plane node in "force-systemd-env-370000" cluster
	I0415 12:06:45.431266   18462 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 12:06:45.452241   18462 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 12:06:45.494202   18462 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:06:45.494236   18462 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 12:06:45.494277   18462 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 12:06:45.494296   18462 cache.go:56] Caching tarball of preloaded images
	I0415 12:06:45.494534   18462 preload.go:173] Found /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 12:06:45.494555   18462 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 12:06:45.494666   18462 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/force-systemd-env-370000/config.json ...
	I0415 12:06:45.495371   18462 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/force-systemd-env-370000/config.json: {Name:mkb2a874cf625eb0031fec2f3deb1fce8a331ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 12:06:45.545381   18462 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 12:06:45.545429   18462 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 12:06:45.545462   18462 cache.go:194] Successfully downloaded all kic artifacts
	I0415 12:06:45.545514   18462 start.go:360] acquireMachinesLock for force-systemd-env-370000: {Name:mk7f842bf57749b82c816320b34117ef7f8505f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 12:06:45.545687   18462 start.go:364] duration metric: took 160.728µs to acquireMachinesLock for "force-systemd-env-370000"
	I0415 12:06:45.545714   18462 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-370000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-370000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 12:06:45.545775   18462 start.go:125] createHost starting for "" (driver="docker")
	I0415 12:06:45.588032   18462 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 12:06:45.588461   18462 start.go:159] libmachine.API.Create for "force-systemd-env-370000" (driver="docker")
	I0415 12:06:45.588504   18462 client.go:168] LocalClient.Create starting
	I0415 12:06:45.588693   18462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 12:06:45.588787   18462 main.go:141] libmachine: Decoding PEM data...
	I0415 12:06:45.588819   18462 main.go:141] libmachine: Parsing certificate...
	I0415 12:06:45.588911   18462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 12:06:45.588992   18462 main.go:141] libmachine: Decoding PEM data...
	I0415 12:06:45.589007   18462 main.go:141] libmachine: Parsing certificate...
	I0415 12:06:45.589817   18462 cli_runner.go:164] Run: docker network inspect force-systemd-env-370000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 12:06:45.639747   18462 cli_runner.go:211] docker network inspect force-systemd-env-370000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 12:06:45.639843   18462 network_create.go:281] running [docker network inspect force-systemd-env-370000] to gather additional debugging logs...
	I0415 12:06:45.639856   18462 cli_runner.go:164] Run: docker network inspect force-systemd-env-370000
	W0415 12:06:45.689141   18462 cli_runner.go:211] docker network inspect force-systemd-env-370000 returned with exit code 1
	I0415 12:06:45.689180   18462 network_create.go:284] error running [docker network inspect force-systemd-env-370000]: docker network inspect force-systemd-env-370000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-370000 not found
	I0415 12:06:45.689191   18462 network_create.go:286] output of [docker network inspect force-systemd-env-370000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-370000 not found
	
	** /stderr **
	I0415 12:06:45.689328   18462 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:06:45.740203   18462 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:06:45.741813   18462 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:06:45.743372   18462 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:06:45.743787   18462 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021fabc0}
	I0415 12:06:45.743809   18462 network_create.go:124] attempt to create docker network force-systemd-env-370000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 12:06:45.743931   18462 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-370000 force-systemd-env-370000
	W0415 12:06:45.793663   18462 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-370000 force-systemd-env-370000 returned with exit code 1
	W0415 12:06:45.793714   18462 network_create.go:149] failed to create docker network force-systemd-env-370000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-370000 force-systemd-env-370000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 12:06:45.793739   18462 network_create.go:116] failed to create docker network force-systemd-env-370000 192.168.76.0/24, will retry: subnet is taken
	I0415 12:06:45.795355   18462 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:06:45.795749   18462 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022aab90}
	I0415 12:06:45.795762   18462 network_create.go:124] attempt to create docker network force-systemd-env-370000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 12:06:45.795830   18462 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-370000 force-systemd-env-370000
	I0415 12:06:45.880746   18462 network_create.go:108] docker network force-systemd-env-370000 192.168.85.0/24 created
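As the probe sequence above suggests, minikube walks candidate private /24 subnets (third octet 49, 58, 67, 76, 85, ...) and moves to the next whenever `docker network create` fails with "Pool overlaps with other one on this address space". A toy sketch of that candidate walk (the step of 9 is inferred from this log, not from minikube's source):

	package main

	import "fmt"

	func main() {
		// Candidate subnets as observed above: 192.168.49.0/24, .58, .67, .76, .85
		for octet := 49; octet <= 85; octet += 9 {
			fmt.Printf("candidate: 192.168.%d.0/24 (gateway 192.168.%d.1)\n", octet, octet)
		}
	}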
	I0415 12:06:45.880800   18462 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-370000" container
	I0415 12:06:45.880919   18462 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 12:06:45.931991   18462 cli_runner.go:164] Run: docker volume create force-systemd-env-370000 --label name.minikube.sigs.k8s.io=force-systemd-env-370000 --label created_by.minikube.sigs.k8s.io=true
	I0415 12:06:45.982030   18462 oci.go:103] Successfully created a docker volume force-systemd-env-370000
	I0415 12:06:45.982144   18462 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-370000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-370000 --entrypoint /usr/bin/test -v force-systemd-env-370000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 12:06:46.392502   18462 oci.go:107] Successfully prepared a docker volume force-systemd-env-370000
	I0415 12:06:46.392564   18462 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:06:46.392580   18462 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 12:06:46.392667   18462 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-370000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
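Almost six minutes elapse between this extraction and the next log line at 12:12:45, which is how the 360-second createHost budget ("create host timed out in 360.000000 seconds" below) gets consumed. A generic sketch of bounding a long-running command with a deadline, assuming nothing about minikube's internals:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// 360s mirrors the create-host budget reported later in this log.
		ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
		defer cancel()
		// Stand-in for the long-running preload extraction above; the process
		// is killed if the deadline expires first.
		out, err := exec.CommandContext(ctx, "docker", "version").CombinedOutput()
		fmt.Printf("%s(err=%v)\n", out, err)
	}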
	I0415 12:12:45.648029   18462 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:12:45.648179   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:12:45.700164   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:12:45.700314   18462 retry.go:31] will retry after 211.701107ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:45.914391   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:12:45.967384   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:12:45.967508   18462 retry.go:31] will retry after 302.456304ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:46.272380   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:12:46.323623   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:12:46.323725   18462 retry.go:31] will retry after 808.213325ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:47.134380   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:12:47.187152   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	W0415 12:12:47.187255   18462 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	
	W0415 12:12:47.187270   18462 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:47.187340   18462 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:12:47.187416   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:12:47.236572   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:12:47.236661   18462 retry.go:31] will retry after 323.363669ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:47.562037   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:12:47.614956   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:12:47.615065   18462 retry.go:31] will retry after 376.958041ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:47.993703   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:12:48.048404   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:12:48.048510   18462 retry.go:31] will retry after 663.136547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:48.713123   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:12:48.766661   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	W0415 12:12:48.766757   18462 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	
	W0415 12:12:48.766772   18462 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:48.766787   18462 start.go:128] duration metric: took 6m3.162080373s to createHost
	I0415 12:12:48.766795   18462 start.go:83] releasing machines lock for "force-systemd-env-370000", held for 6m3.162182081s
	W0415 12:12:48.766810   18462 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 12:12:48.767231   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:12:48.816973   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:48.817041   18462 delete.go:82] Unable to get host status for force-systemd-env-370000, assuming it has already been deleted: state: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	W0415 12:12:48.817141   18462 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 12:12:48.817152   18462 start.go:728] Will try again in 5 seconds ...
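The retry.go delays earlier in this transcript (roughly 0.2s, 0.3s, 0.8s, ...) grow with jitter before minikube gives up and, as here, waits a flat five seconds for the second StartHost attempt. A self-contained sketch of that backoff shape (illustrative only, not minikube's retry package):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func main() {
		base := 100 * time.Millisecond
		for attempt := 0; attempt < 5; attempt++ {
			d := base << attempt                          // 100ms, 200ms, 400ms, ...
			d += time.Duration(rand.Int63n(int64(d) / 2)) // add up to 50% jitter
			fmt.Printf("attempt %d: sleeping %v\n", attempt, d)
			time.Sleep(d)
		}
	}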
	I0415 12:12:53.820011   18462 start.go:360] acquireMachinesLock for force-systemd-env-370000: {Name:mk7f842bf57749b82c816320b34117ef7f8505f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 12:12:53.820335   18462 start.go:364] duration metric: took 164.467µs to acquireMachinesLock for "force-systemd-env-370000"
	I0415 12:12:53.820379   18462 start.go:96] Skipping create...Using existing machine configuration
	I0415 12:12:53.820396   18462 fix.go:54] fixHost starting: 
	I0415 12:12:53.820936   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:12:53.871156   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:53.871201   18462 fix.go:112] recreateIfNeeded on force-systemd-env-370000: state= err=unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:53.871219   18462 fix.go:117] machineExists: false. err=machine does not exist
	I0415 12:12:53.914640   18462 out.go:177] * docker "force-systemd-env-370000" container is missing, will recreate.
	I0415 12:12:53.935573   18462 delete.go:124] DEMOLISHING force-systemd-env-370000 ...
	I0415 12:12:53.935767   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:12:53.985799   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	W0415 12:12:53.985856   18462 stop.go:83] unable to get state: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:53.985876   18462 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:53.986256   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:12:54.035038   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:54.035100   18462 delete.go:82] Unable to get host status for force-systemd-env-370000, assuming it has already been deleted: state: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:54.035187   18462 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-370000
	W0415 12:12:54.083998   18462 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-370000 returned with exit code 1
	I0415 12:12:54.084039   18462 kic.go:371] could not find the container force-systemd-env-370000 to remove it. will try anyways
	I0415 12:12:54.084115   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:12:54.133279   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	W0415 12:12:54.133334   18462 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:54.133421   18462 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-370000 /bin/bash -c "sudo init 0"
	W0415 12:12:54.182479   18462 cli_runner.go:211] docker exec --privileged -t force-systemd-env-370000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 12:12:54.182515   18462 oci.go:650] error shutdown force-systemd-env-370000: docker exec --privileged -t force-systemd-env-370000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:55.182922   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:12:55.233772   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:55.233817   18462 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:55.233831   18462 oci.go:664] temporary error: container force-systemd-env-370000 status is  but expect it to be exited
	I0415 12:12:55.233858   18462 retry.go:31] will retry after 364.492159ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:55.599780   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:12:55.652080   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:55.652127   18462 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:55.652138   18462 oci.go:664] temporary error: container force-systemd-env-370000 status is  but expect it to be exited
	I0415 12:12:55.652165   18462 retry.go:31] will retry after 953.203536ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:56.607730   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:12:56.661320   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:56.661371   18462 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:56.661386   18462 oci.go:664] temporary error: container force-systemd-env-370000 status is  but expect it to be exited
	I0415 12:12:56.661410   18462 retry.go:31] will retry after 1.367880847s: couldn't verify container is exited. %v: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:58.029771   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:12:58.082502   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:58.082573   18462 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:58.082587   18462 oci.go:664] temporary error: container force-systemd-env-370000 status is  but expect it to be exited
	I0415 12:12:58.082613   18462 retry.go:31] will retry after 1.362533852s: couldn't verify container is exited. %v: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:59.445739   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:12:59.498661   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	I0415 12:12:59.498715   18462 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:12:59.498725   18462 oci.go:664] temporary error: container force-systemd-env-370000 status is  but expect it to be exited
	I0415 12:12:59.498754   18462 retry.go:31] will retry after 2.178807755s: couldn't verify container is exited. %v: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:13:01.679630   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:13:01.731485   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	I0415 12:13:01.731545   18462 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:13:01.731563   18462 oci.go:664] temporary error: container force-systemd-env-370000 status is  but expect it to be exited
	I0415 12:13:01.731588   18462 retry.go:31] will retry after 4.321802819s: couldn't verify container is exited. %v: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:13:06.054644   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:13:06.107399   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	I0415 12:13:06.107448   18462 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:13:06.107460   18462 oci.go:664] temporary error: container force-systemd-env-370000 status is  but expect it to be exited
	I0415 12:13:06.107483   18462 retry.go:31] will retry after 4.957086316s: couldn't verify container is exited. %v: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:13:11.065707   18462 cli_runner.go:164] Run: docker container inspect force-systemd-env-370000 --format={{.State.Status}}
	W0415 12:13:11.117865   18462 cli_runner.go:211] docker container inspect force-systemd-env-370000 --format={{.State.Status}} returned with exit code 1
	I0415 12:13:11.117914   18462 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:13:11.117923   18462 oci.go:664] temporary error: container force-systemd-env-370000 status is  but expect it to be exited
	I0415 12:13:11.117962   18462 oci.go:88] couldn't shut down force-systemd-env-370000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	 
	I0415 12:13:11.118035   18462 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-370000
	I0415 12:13:11.167531   18462 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-370000
	W0415 12:13:11.216572   18462 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-370000 returned with exit code 1
	I0415 12:13:11.216681   18462 cli_runner.go:164] Run: docker network inspect force-systemd-env-370000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:13:11.266499   18462 cli_runner.go:164] Run: docker network rm force-systemd-env-370000
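The retry cadence above (≈0.95s, 1.37s, 1.36s, 2.18s, 4.32s, 4.96s) is the retry.go helper backing off roughly exponentially, with jitter, while it waits for the container to report an exited state. A minimal Go sketch of that pattern, using a hypothetical retryWithBackoff helper rather than minikube's actual API:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff runs fn until it succeeds or attempts are exhausted,
    // sleeping an exponentially growing, jittered interval between tries,
    // matching the "will retry after ..." cadence in the log above.
    func retryWithBackoff(fn func() error, maxAttempts int) error {
    	base := 500 * time.Millisecond
    	var err error
    	for attempt := 0; attempt < maxAttempts; attempt++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		backoff := base << attempt // 0.5s, 1s, 2s, 4s, ...
    		jitter := time.Duration(rand.Int63n(int64(backoff) / 2))
    		fmt.Printf("will retry after %v: %v\n", backoff+jitter, err)
    		time.Sleep(backoff + jitter)
    	}
    	return fmt.Errorf("retries exhausted: %w", err)
    }

Here verification can never succeed because the container is already gone, so the loop gives up ("might be okay") and minikube falls back to the forced docker rm and network removal just above.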
	I0415 12:13:11.377324   18462 fix.go:124] Sleeping 1 second for extra luck!
	I0415 12:13:12.377780   18462 start.go:125] createHost starting for "" (driver="docker")
	I0415 12:13:12.416013   18462 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 12:13:12.416194   18462 start.go:159] libmachine.API.Create for "force-systemd-env-370000" (driver="docker")
	I0415 12:13:12.416221   18462 client.go:168] LocalClient.Create starting
	I0415 12:13:12.416454   18462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 12:13:12.416566   18462 main.go:141] libmachine: Decoding PEM data...
	I0415 12:13:12.416596   18462 main.go:141] libmachine: Parsing certificate...
	I0415 12:13:12.416683   18462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 12:13:12.416757   18462 main.go:141] libmachine: Decoding PEM data...
	I0415 12:13:12.416773   18462 main.go:141] libmachine: Parsing certificate...
	I0415 12:13:12.437147   18462 cli_runner.go:164] Run: docker network inspect force-systemd-env-370000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 12:13:12.487322   18462 cli_runner.go:211] docker network inspect force-systemd-env-370000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 12:13:12.487435   18462 network_create.go:281] running [docker network inspect force-systemd-env-370000] to gather additional debugging logs...
	I0415 12:13:12.487453   18462 cli_runner.go:164] Run: docker network inspect force-systemd-env-370000
	W0415 12:13:12.536589   18462 cli_runner.go:211] docker network inspect force-systemd-env-370000 returned with exit code 1
	I0415 12:13:12.536620   18462 network_create.go:284] error running [docker network inspect force-systemd-env-370000]: docker network inspect force-systemd-env-370000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-370000 not found
	I0415 12:13:12.536637   18462 network_create.go:286] output of [docker network inspect force-systemd-env-370000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-370000 not found
	
	** /stderr **
	I0415 12:13:12.536791   18462 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 12:13:12.588257   18462 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:13:12.589840   18462 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:13:12.591391   18462 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:13:12.592946   18462 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:13:12.594477   18462 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:13:12.596161   18462 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 12:13:12.596576   18462 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000c78db0}
	I0415 12:13:12.596589   18462 network_create.go:124] attempt to create docker network force-systemd-env-370000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0415 12:13:12.596667   18462 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-370000 force-systemd-env-370000
	I0415 12:13:12.681694   18462 network_create.go:108] docker network force-systemd-env-370000 192.168.103.0/24 created
	I0415 12:13:12.681744   18462 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-370000" container
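The subnet scan above steps through candidate private /24 ranges (192.168.49.0, .58, .67, .76, .85, .94, ...), nine at a time in the third octet, skips any range reserved by an existing network, and then derives the gateway as .1 and the node's static IP as .2 of the first free range. A sketch of that selection loop under the rules observable in the log (the reserved-set lookup is an assumed input, not minikube's real data structure):

    package main

    import "fmt"

    // pickFreeSubnet walks 192.168.X.0/24 for X = 49, 58, 67, ... and
    // returns the first candidate not already reserved, plus the derived
    // gateway (.1) and static node IP (.2) seen in the log above.
    func pickFreeSubnet(reserved map[string]bool) (subnet, gateway, nodeIP string, err error) {
    	for third := 49; third < 256; third += 9 {
    		candidate := fmt.Sprintf("192.168.%d.0/24", third)
    		if reserved[candidate] {
    			fmt.Printf("skipping subnet %s that is reserved\n", candidate)
    			continue
    		}
    		return candidate, fmt.Sprintf("192.168.%d.1", third), fmt.Sprintf("192.168.%d.2", third), nil
    	}
    	return "", "", "", fmt.Errorf("no free private /24 subnet found")
    }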
	I0415 12:13:12.681842   18462 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 12:13:12.733265   18462 cli_runner.go:164] Run: docker volume create force-systemd-env-370000 --label name.minikube.sigs.k8s.io=force-systemd-env-370000 --label created_by.minikube.sigs.k8s.io=true
	I0415 12:13:12.782244   18462 oci.go:103] Successfully created a docker volume force-systemd-env-370000
	I0415 12:13:12.782388   18462 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-370000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-370000 --entrypoint /usr/bin/test -v force-systemd-env-370000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 12:13:13.069549   18462 oci.go:107] Successfully prepared a docker volume force-systemd-env-370000
	I0415 12:13:13.069593   18462 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 12:13:13.069606   18462 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 12:13:13.069715   18462 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-370000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
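This extraction is where the run stalls: the next log line is timestamped 12:19:12, six minutes after the tar container was launched, which later blows the 360-second host-creation budget. The step itself just unpacks the lz4-compressed preload into the machine volume; driving the same command from Go looks roughly like this (a sketch around os/exec, not minikube's kic code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // extractPreload mirrors the `docker run --rm --entrypoint /usr/bin/tar`
    // invocation above: the preload tarball is mounted read-only, the
    // machine volume is mounted at /extractDir, and tar -I lz4 unpacks it.
    func extractPreload(tarball, volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("extract preload: %w: %s", err, out)
    	}
    	return nil
    }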
	I0415 12:19:12.419997   18462 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:19:12.420151   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:12.474432   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:12.474549   18462 retry.go:31] will retry after 189.166094ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
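Every failing lookup in this stretch evaluates the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, i.e. "the host port Docker published for container port 22". The same query through the Docker Go SDK would look roughly like this sketch (error handling trimmed):

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/docker/docker/client"
    	"github.com/docker/go-connections/nat"
    )

    // sshHostPort returns the host port bound to 22/tcp, the value the
    // inspect template above extracts; it fails the same way when the
    // container does not exist.
    func sshHostPort(ctx context.Context, cli *client.Client, name string) (string, error) {
    	info, err := cli.ContainerInspect(ctx, name)
    	if err != nil {
    		return "", fmt.Errorf("get port 22 for %q: %w", name, err)
    	}
    	bindings := info.NetworkSettings.Ports[nat.Port("22/tcp")]
    	if len(bindings) == 0 {
    		return "", fmt.Errorf("no host binding for 22/tcp on %q", name)
    	}
    	return bindings[0].HostPort, nil
    }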
	I0415 12:19:12.666038   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:12.718326   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:12.718431   18462 retry.go:31] will retry after 355.818596ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:13.076165   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:13.127390   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:13.127513   18462 retry.go:31] will retry after 510.553473ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:13.638996   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:13.691764   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	W0415 12:19:13.691896   18462 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	
	W0415 12:19:13.691914   18462 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:13.691967   18462 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:19:13.692035   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:13.741584   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:13.741690   18462 retry.go:31] will retry after 244.373351ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:13.986840   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:14.038745   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:14.038837   18462 retry.go:31] will retry after 239.11286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:14.278252   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:14.328240   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:14.328339   18462 retry.go:31] will retry after 634.528411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:14.965275   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:15.018696   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:15.018815   18462 retry.go:31] will retry after 525.900095ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:15.546188   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:15.599661   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	W0415 12:19:15.599778   18462 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	
	W0415 12:19:15.599793   18462 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:15.599799   18462 start.go:128] duration metric: took 6m3.22052804s to createHost
	I0415 12:19:15.599870   18462 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 12:19:15.599925   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:15.649002   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:15.649093   18462 retry.go:31] will retry after 374.861119ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:16.024852   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:16.076861   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:16.076954   18462 retry.go:31] will retry after 502.116381ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:16.579446   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:16.631365   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:16.631464   18462 retry.go:31] will retry after 722.211306ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:17.354083   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:17.406385   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	W0415 12:19:17.406487   18462 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	
	W0415 12:19:17.406505   18462 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:17.406566   18462 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 12:19:17.406628   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:17.455941   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:17.456032   18462 retry.go:31] will retry after 130.45179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:17.588792   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:17.639995   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:17.640092   18462 retry.go:31] will retry after 480.853842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:18.122216   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:18.174885   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:18.174985   18462 retry.go:31] will retry after 315.873833ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:18.493147   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:18.545570   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	I0415 12:19:18.545672   18462 retry.go:31] will retry after 611.808892ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:19.158807   18462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000
	W0415 12:19:19.211158   18462 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000 returned with exit code 1
	W0415 12:19:19.211258   18462 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	
	W0415 12:19:19.211290   18462 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-370000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-370000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	I0415 12:19:19.211304   18462 fix.go:56] duration metric: took 6m25.389370638s for fixHost
	I0415 12:19:19.211311   18462 start.go:83] releasing machines lock for "force-systemd-env-370000", held for 6m25.389422283s
	W0415 12:19:19.211389   18462 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-370000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-370000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 12:19:19.253609   18462 out.go:177] 
	W0415 12:19:19.274861   18462 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 12:19:19.274910   18462 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 12:19:19.274942   18462 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 12:19:19.295723   18462 out.go:177] 

** /stderr **
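The root cause chain reads directly out of the log: the preload extraction consumed the entire window, createHost took 6m3s against a 360-second limit, and DRV_CREATE_TIMEOUT is the resulting classification. The bound itself is the standard Go context pattern; a minimal sketch, with createHostWithDeadline and the create callback as hypothetical names:

    package main

    import (
    	"context"
    	"fmt"
    	"time"
    )

    // createHostWithDeadline bounds host creation at 360s, matching the
    // "create host timed out in 360.000000 seconds" failure above.
    func createHostWithDeadline(create func(context.Context) error) error {
    	ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
    	defer cancel()

    	done := make(chan error, 1)
    	go func() { done <- create(ctx) }()

    	select {
    	case err := <-done:
    		return err
    	case <-ctx.Done():
    		return fmt.Errorf("creating host: create host timed out in %.6f seconds", 360.0)
    	}
    }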
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-370000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-370000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-370000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (201.449441ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-370000 host status: state: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-370000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-15 12:19:19.571672 -0700 PDT m=+6165.141224931
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-370000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-370000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-370000",
	        "Id": "13e031be60f962be155ddb973fefc3707d9f925540764abdb42d8b4947914c76",
	        "Created": "2024-04-15T19:13:12.642905973Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-370000"
	        }
	    }
	]

-- /stdout --
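The post-mortem confirms the shape of the failure: the force-systemd-env-370000 network was created and still exists, but its Containers map is empty, so the node container never attached. That check can be scripted against the inspect JSON with only the fields shown above (a sketch; pipe `docker network inspect force-systemd-env-370000` into stdin):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // network captures only the inspect fields used in the output above.
    type network struct {
    	Name       string                     `json:"Name"`
    	Containers map[string]json.RawMessage `json:"Containers"`
    }

    func main() {
    	var nets []network // `docker network inspect` emits a JSON array
    	if err := json.NewDecoder(os.Stdin).Decode(&nets); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for _, n := range nets {
    		fmt.Printf("%s: %d attached container(s)\n", n.Name, len(n.Containers))
    	}
    }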
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-370000 -n force-systemd-env-370000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-370000 -n force-systemd-env-370000: exit status 7 (112.374834ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0415 12:19:19.734738   18949 status.go:249] status error: host: state: unknown state "force-systemd-env-370000": docker container inspect force-systemd-env-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-370000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-370000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-370000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-370000
--- FAIL: TestForceSystemdEnv (755.72s)

TestMountStart/serial/VerifyMountSecond (884.67s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-019000 ssh -- ls /minikube-host
E0415 11:04:42.190194    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:04:42.745426    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 11:06:05.237396    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:09:42.216376    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:09:42.773907    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 11:14:42.216525    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:14:42.773334    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 11:17:45.974053    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-019000 ssh -- ls /minikube-host: signal: killed (14m44.116496042s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-019000 ssh -- ls /minikube-host" : signal: killed
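"signal: killed" here means the test harness reaped the ssh command when its own deadline expired, not that the command failed on its own; the ls simply hung for 14m44s against the /minikube-host bind mount. That is exactly how exec.CommandContext reports a context-expired child in Go; a minimal sketch:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    	defer cancel()

    	// Stand-in for the hung `minikube ssh -- ls /minikube-host`.
    	cmd := exec.CommandContext(ctx, "sleep", "60")
    	if err := cmd.Run(); err != nil {
    		fmt.Println("mount check failed:", err) // prints: signal: killed
    	}
    }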
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-019000
helpers_test.go:235: (dbg) docker inspect mount-start-2-019000:

-- stdout --
	[
	    {
	        "Id": "2b3368d66313062665924bd9d7d7cc2b81fa046fe7e353036d29e36215440f36",
	        "Created": "2024-04-15T18:03:10.942384974Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156253,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-15T18:03:11.140069803Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:06fc94f477def8d6ec1f9decaa8d9de4b332d5597cd1759a7075056e46e00dfc",
	        "ResolvConfPath": "/var/lib/docker/containers/2b3368d66313062665924bd9d7d7cc2b81fa046fe7e353036d29e36215440f36/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b3368d66313062665924bd9d7d7cc2b81fa046fe7e353036d29e36215440f36/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b3368d66313062665924bd9d7d7cc2b81fa046fe7e353036d29e36215440f36/hosts",
	        "LogPath": "/var/lib/docker/containers/2b3368d66313062665924bd9d7d7cc2b81fa046fe7e353036d29e36215440f36/2b3368d66313062665924bd9d7d7cc2b81fa046fe7e353036d29e36215440f36-json.log",
	        "Name": "/mount-start-2-019000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-019000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-019000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/56cc93d44bfadeda6c1404f1bfb1f2210e09adc551f1c29027e42352ae3e798c-init/diff:/var/lib/docker/overlay2/5d56c218ec7dab8e4078746844d2c7a861726c242393ccf7538e31bbc10471d9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56cc93d44bfadeda6c1404f1bfb1f2210e09adc551f1c29027e42352ae3e798c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56cc93d44bfadeda6c1404f1bfb1f2210e09adc551f1c29027e42352ae3e798c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56cc93d44bfadeda6c1404f1bfb1f2210e09adc551f1c29027e42352ae3e798c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-019000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-019000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-019000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-019000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-019000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "768623b001332d6cf63534b0c7dfef2758a4c8e603ce48c1ec5272132d2a1272",
	            "SandboxKey": "/var/run/docker/netns/768623b00133",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54537"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54538"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54539"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54540"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54541"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-019000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b3368d66313",
	                        "mount-start-2-019000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "71e417e1dde848238db5e9712b8c6ec4af45597218d52e0d439baa4f297b4ba8",
	                    "EndpointID": "ff8b39b78c7aa3ca50f01b05ff5490d0aa0c24490329d4ba55585dff020cc7d0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-2-019000",
	                        "2b3368d66313"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-019000 -n mount-start-2-019000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-019000 -n mount-start-2-019000: exit status 6 (501.930115ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0415 11:18:01.479265   15719 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-019000" does not appear in /Users/jenkins/minikube-integration/18634-8183/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-019000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountSecond (884.67s)

TestMultiNode/serial/FreshStart2Nodes (755.06s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-070000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0415 11:19:42.216273    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:19:42.773184    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 11:22:45.267059    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:24:42.244404    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:24:42.801571    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 11:29:42.245887    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:29:42.802932    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-070000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m34.884352605s)

-- stdout --
	* [multinode-070000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-070000" primary control-plane node in "multinode-070000" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-070000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0415 11:19:12.208742   15868 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:19:12.209005   15868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:19:12.209011   15868 out.go:304] Setting ErrFile to fd 2...
	I0415 11:19:12.209014   15868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:19:12.209197   15868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:19:12.210639   15868 out.go:298] Setting JSON to false
	I0415 11:19:12.232901   15868 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4723,"bootTime":1713200429,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 11:19:12.232989   15868 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 11:19:12.255041   15868 out.go:177] * [multinode-070000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 11:19:12.296648   15868 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 11:19:12.296689   15868 notify.go:220] Checking for updates...
	I0415 11:19:12.338563   15868 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 11:19:12.359650   15868 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 11:19:12.380789   15868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 11:19:12.401751   15868 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	I0415 11:19:12.422662   15868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 11:19:12.444187   15868 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 11:19:12.500335   15868 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 11:19:12.500528   15868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 11:19:12.596900   15868 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:false NGoroutines:115 SystemTime:2024-04-15 18:19:12.587014305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
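
The info.go:266 line above is the driver health check: the tool shells out to "docker system info" with a JSON template and decodes the result. A minimal standalone sketch of that probe; the struct keeps only a few of the fields visible in the logged payload and is an illustration, not minikube's info.go:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo holds just the fields this sketch cares about; the real
    // `docker system info --format "{{json .}}"` payload is far larger
    // (see the log line above).
    type dockerInfo struct {
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }

Run against the same daemon this would report 12 CPUs and 6213296128 bytes, matching the NCPU and MemTotal fields logged above.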
	I0415 11:19:12.618883   15868 out.go:177] * Using the docker driver based on user configuration
	I0415 11:19:12.660307   15868 start.go:297] selected driver: docker
	I0415 11:19:12.660339   15868 start.go:901] validating driver "docker" against <nil>
	I0415 11:19:12.660353   15868 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 11:19:12.664656   15868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 11:19:12.763661   15868 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:false NGoroutines:115 SystemTime:2024-04-15 18:19:12.753959604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 11:19:12.763838   15868 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 11:19:12.764025   15868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 11:19:12.785727   15868 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 11:19:12.807395   15868 cni.go:84] Creating CNI manager for ""
	I0415 11:19:12.807426   15868 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 11:19:12.807449   15868 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 11:19:12.807580   15868 start.go:340] cluster config:
	{Name:multinode-070000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:19:12.829578   15868 out.go:177] * Starting "multinode-070000" primary control-plane node in "multinode-070000" cluster
	I0415 11:19:12.871554   15868 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 11:19:12.892383   15868 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 11:19:12.934500   15868 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 11:19:12.934556   15868 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 11:19:12.934573   15868 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 11:19:12.934595   15868 cache.go:56] Caching tarball of preloaded images
	I0415 11:19:12.934814   15868 preload.go:173] Found /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 11:19:12.934833   15868 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 11:19:12.936575   15868 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/multinode-070000/config.json ...
	I0415 11:19:12.936655   15868 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/multinode-070000/config.json: {Name:mk66b3f13fcfee61ecd0efed6af58912198eed94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
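
The lock.go:35 line above shows the profile config write guarded by a named lock with a 500ms poll delay and a 1m0s timeout. A generic lock-file sketch of that shape follows; it is an assumption-level illustration of the pattern, not minikube's lock package:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // writeLocked guards a config write with a best-effort lock file:
    // O_EXCL creation either wins the lock or fails, and on failure we
    // poll every 500ms (matching the logged Delay) until a 1m deadline.
    func writeLocked(path string, data []byte) error {
        lock := path + ".lock"
        deadline := time.Now().Add(time.Minute)
        for {
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                defer os.Remove(lock)
                return os.WriteFile(path, data, 0o644)
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", lock)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := writeLocked("/tmp/config.json", []byte(`{}`)); err != nil {
            panic(err)
        }
    }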
	I0415 11:19:12.987255   15868 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 11:19:12.987275   15868 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 11:19:12.987298   15868 cache.go:194] Successfully downloaded all kic artifacts
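
The base-image check above resolves in about 50ms with no pull because the kicbase digest is already loaded in the local daemon. "docker image inspect" exiting zero is all the evidence needed; a tiny sketch of that presence probe, with the image reference shortened for readability:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `docker image inspect` exits non-zero when the image is absent,
        // so the error value doubles as a presence check (the digest suffix
        // from the log is omitted here).
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634"
        if err := exec.Command("docker", "image", "inspect", ref).Run(); err != nil {
            fmt.Println("not in local daemon, would pull:", err)
            return
        }
        fmt.Println("found in local daemon, skipping pull")
    }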
	I0415 11:19:12.987361   15868 start.go:360] acquireMachinesLock for multinode-070000: {Name:mkf862a823ebac9b2411d5d0611461e02835237d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:19:12.987515   15868 start.go:364] duration metric: took 141.082µs to acquireMachinesLock for "multinode-070000"
	I0415 11:19:12.987542   15868 start.go:93] Provisioning new machine with config: &{Name:multinode-070000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 11:19:12.987637   15868 start.go:125] createHost starting for "" (driver="docker")
	I0415 11:19:13.029469   15868 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 11:19:13.029841   15868 start.go:159] libmachine.API.Create for "multinode-070000" (driver="docker")
	I0415 11:19:13.029882   15868 client.go:168] LocalClient.Create starting
	I0415 11:19:13.030121   15868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 11:19:13.030222   15868 main.go:141] libmachine: Decoding PEM data...
	I0415 11:19:13.030262   15868 main.go:141] libmachine: Parsing certificate...
	I0415 11:19:13.030357   15868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 11:19:13.030450   15868 main.go:141] libmachine: Decoding PEM data...
	I0415 11:19:13.030466   15868 main.go:141] libmachine: Parsing certificate...
	I0415 11:19:13.031338   15868 cli_runner.go:164] Run: docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 11:19:13.081376   15868 cli_runner.go:211] docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 11:19:13.081469   15868 network_create.go:281] running [docker network inspect multinode-070000] to gather additional debugging logs...
	I0415 11:19:13.081489   15868 cli_runner.go:164] Run: docker network inspect multinode-070000
	W0415 11:19:13.130480   15868 cli_runner.go:211] docker network inspect multinode-070000 returned with exit code 1
	I0415 11:19:13.130516   15868 network_create.go:284] error running [docker network inspect multinode-070000]: docker network inspect multinode-070000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-070000 not found
	I0415 11:19:13.130529   15868 network_create.go:286] output of [docker network inspect multinode-070000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-070000 not found
	
	** /stderr **
	I0415 11:19:13.130675   15868 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 11:19:13.184871   15868 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:19:13.186475   15868 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:19:13.186858   15868 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00222d0a0}
	I0415 11:19:13.186873   15868 network_create.go:124] attempt to create docker network multinode-070000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 11:19:13.186940   15868 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-070000 multinode-070000
	I0415 11:19:13.272312   15868 network_create.go:108] docker network multinode-070000 192.168.67.0/24 created
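
The network.go lines above show the subnet picker: walk 192.168.x.0/24 with x stepping by 9 from 49, skip anything already reserved by an existing docker network, and take the first free candidate. A minimal sketch of that walk; the start value and step size are read off this log rather than any documented contract:

    package main

    import "fmt"

    // freeSubnet returns the first 192.168.x.0/24 candidate not in reserved,
    // stepping x by 9 the way the scan above does (49 -> 58 -> 67 -> ...).
    func freeSubnet(reserved map[string]bool) string {
        for third := 49; third <= 247; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if !reserved[cidr] {
                return cidr
            }
        }
        return "" // nothing free in range
    }

    func main() {
        reserved := map[string]bool{
            "192.168.49.0/24": true, // taken in the log
            "192.168.58.0/24": true, // taken in the log
        }
        fmt.Println(freeSubnet(reserved)) // 192.168.67.0/24, as chosen above
    }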
	I0415 11:19:13.272351   15868 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-070000" container
	I0415 11:19:13.272446   15868 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 11:19:13.321989   15868 cli_runner.go:164] Run: docker volume create multinode-070000 --label name.minikube.sigs.k8s.io=multinode-070000 --label created_by.minikube.sigs.k8s.io=true
	I0415 11:19:13.371787   15868 oci.go:103] Successfully created a docker volume multinode-070000
	I0415 11:19:13.371885   15868 cli_runner.go:164] Run: docker run --rm --name multinode-070000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-070000 --entrypoint /usr/bin/test -v multinode-070000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 11:19:13.743646   15868 oci.go:107] Successfully prepared a docker volume multinode-070000
	I0415 11:19:13.743683   15868 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 11:19:13.743699   15868 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 11:19:13.743791   15868 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-070000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
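
Note the timestamps here: this extraction command is issued at 11:19:13 and the next log line arrives at 11:25:13, so the step sat for six minutes and consumed essentially the whole 360-second createHost budget that expires further down. The command itself is a throwaway container whose entrypoint is tar, with the lz4 preload mounted read-only and the named volume as the destination. A sketch of the same invocation; the tarball path, volume name, and image tag below are placeholders:

    package main

    import "os/exec"

    func main() {
        // Throwaway container: tar runs as the entrypoint, reading the
        // read-only preload tarball and unpacking it into the named volume.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
            "-v", "myvolume:/extractDir",
            "some/base-image:tag",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            panic(string(out))
        }
    }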
	I0415 11:25:13.059139   15868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:25:13.059276   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:25:13.110917   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:25:13.111020   15868 retry.go:31] will retry after 259.780998ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
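
Every port-22 probe in this stretch runs the same inspect template, and each failure is handed to a retry helper that sleeps a roughly growing, jittered delay (259ms, 226ms, 738ms above). A generic sketch of that retry pattern; this captures the shape of the behavior, not minikube's actual retry.go:

    package main

    import (
        "fmt"
        "log"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a jittered, growing
    // delay between failures, in the spirit of the retries logged above.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            log.Printf("will retry after %v: %v", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        err := retry(4, 250*time.Millisecond, func() error {
            return fmt.Errorf("get port 22: container not found") // stand-in failure
        })
        log.Fatal(err)
    }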
	I0415 11:25:13.371579   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:25:13.422745   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:25:13.422850   15868 retry.go:31] will retry after 226.543516ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:13.649785   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:25:13.701937   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:25:13.702042   15868 retry.go:31] will retry after 738.158398ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:14.440754   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:25:14.493539   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:25:14.493641   15868 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:25:14.493660   15868 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:14.493713   15868 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 11:25:14.493770   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:25:14.542792   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:25:14.542877   15868 retry.go:31] will retry after 255.051584ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:14.799960   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:25:14.854021   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:25:14.854120   15868 retry.go:31] will retry after 294.819063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:15.151341   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:25:15.202753   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:25:15.202846   15868 retry.go:31] will retry after 435.267782ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:15.639603   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:25:15.692032   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:25:15.692143   15868 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:25:15.692167   15868 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:15.692180   15868 start.go:128] duration metric: took 6m2.675903088s to createHost
	I0415 11:25:15.692187   15868 start.go:83] releasing machines lock for "multinode-070000", held for 6m2.676039403s
	W0415 11:25:15.692204   15868 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 11:25:15.692653   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:15.740931   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:25:15.740980   15868 delete.go:82] Unable to get host status for multinode-070000, assuming it has already been deleted: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	W0415 11:25:15.741057   15868 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 11:25:15.741069   15868 start.go:728] Will try again in 5 seconds ...
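
The arithmetic above is consistent: createHost started at 11:19:12.987, was abandoned 6m2s later just past its 360-second budget, and the caller then waits five seconds and retries. A compressed sketch of that control flow; createHost below is a placeholder, not the real provisioning step:

    package main

    import (
        "context"
        "errors"
        "log"
        "time"
    )

    // createHost is a placeholder for the real provisioning step; here it
    // simulates a stall that outlives its context, as happened in this run.
    func createHost(ctx context.Context) error {
        select {
        case <-ctx.Done():
            return errors.New("create host timed out in 360.000000 seconds")
        case <-time.After(10 * time.Minute):
            return nil
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
        defer cancel()
        if err := createHost(ctx); err != nil {
            log.Printf("! StartHost failed, but will try again: %v", err)
            time.Sleep(5 * time.Second)
            // the second attempt proceeds on the recreate path shown below
        }
    }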
	I0415 11:25:20.742416   15868 start.go:360] acquireMachinesLock for multinode-070000: {Name:mkf862a823ebac9b2411d5d0611461e02835237d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:25:20.742688   15868 start.go:364] duration metric: took 156.864µs to acquireMachinesLock for "multinode-070000"
	I0415 11:25:20.742727   15868 start.go:96] Skipping create...Using existing machine configuration
	I0415 11:25:20.742744   15868 fix.go:54] fixHost starting: 
	I0415 11:25:20.743262   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:20.795828   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:25:20.795872   15868 fix.go:112] recreateIfNeeded on multinode-070000: state= err=unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:20.795889   15868 fix.go:117] machineExists: false. err=machine does not exist
	I0415 11:25:20.817725   15868 out.go:177] * docker "multinode-070000" container is missing, will recreate.
	I0415 11:25:20.859263   15868 delete.go:124] DEMOLISHING multinode-070000 ...
	I0415 11:25:20.859433   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:20.909631   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	W0415 11:25:20.909688   15868 stop.go:83] unable to get state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:20.909706   15868 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:20.910083   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:20.959193   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:25:20.959242   15868 delete.go:82] Unable to get host status for multinode-070000, assuming it has already been deleted: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:20.959326   15868 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-070000
	W0415 11:25:21.008312   15868 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-070000 returned with exit code 1
	I0415 11:25:21.008347   15868 kic.go:371] could not find the container multinode-070000 to remove it. will try anyways
	I0415 11:25:21.008413   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:21.057971   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	W0415 11:25:21.058031   15868 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:21.058115   15868 cli_runner.go:164] Run: docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0"
	W0415 11:25:21.108207   15868 cli_runner.go:211] docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 11:25:21.108239   15868 oci.go:650] error shutdown multinode-070000: docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:22.110661   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:22.163611   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:25:22.163655   15868 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:22.163669   15868 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:25:22.163692   15868 retry.go:31] will retry after 358.509334ms: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:22.524668   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:22.575245   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:25:22.575293   15868 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:22.575302   15868 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:25:22.575321   15868 retry.go:31] will retry after 505.2885ms: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:23.080935   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:23.135431   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:25:23.135479   15868 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:23.135493   15868 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:25:23.135520   15868 retry.go:31] will retry after 1.320344859s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:24.457467   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:24.509388   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:25:24.509432   15868 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:24.509448   15868 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:25:24.509471   15868 retry.go:31] will retry after 1.648416574s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:26.158181   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:26.209898   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:25:26.209965   15868 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:26.209976   15868 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:25:26.210009   15868 retry.go:31] will retry after 2.924911863s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:29.135899   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:29.186886   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:25:29.186935   15868 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:29.186949   15868 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:25:29.186975   15868 retry.go:31] will retry after 3.855724614s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:33.045128   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:33.097895   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:25:33.097937   15868 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:33.097948   15868 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:25:33.097972   15868 retry.go:31] will retry after 6.39217392s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:39.490631   15868 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:25:39.543468   15868 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:25:39.543518   15868 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:25:39.543534   15868 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:25:39.543561   15868 oci.go:88] couldn't shut down multinode-070000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	 
	I0415 11:25:39.543640   15868 cli_runner.go:164] Run: docker rm -f -v multinode-070000
	I0415 11:25:39.594885   15868 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-070000
	W0415 11:25:39.643596   15868 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-070000 returned with exit code 1
	I0415 11:25:39.643707   15868 cli_runner.go:164] Run: docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 11:25:39.693399   15868 cli_runner.go:164] Run: docker network rm multinode-070000
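
The DEMOLISHING sequence above is deliberately best-effort: the stop, status, and exec attempts all fail with "No such container" and are tolerated, and the terminal steps are an unconditional force-remove of the container (and its anonymous volumes) plus removal of the per-profile network. A condensed sketch of that teardown:

    package main

    import (
        "log"
        "os/exec"
    )

    // demolish mirrors the best-effort teardown logged above: errors such
    // as "No such container" are logged and ignored.
    func demolish(name string) {
        for _, args := range [][]string{
            {"rm", "-f", "-v", name}, // force-remove container + anonymous volumes
            {"network", "rm", name},  // drop the per-profile network
        } {
            if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
                log.Printf("docker %v (probably ok): %v: %s", args, err, out)
            }
        }
    }

    func main() { demolish("multinode-070000") }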
	I0415 11:25:39.803286   15868 fix.go:124] Sleeping 1 second for extra luck!
	I0415 11:25:40.803745   15868 start.go:125] createHost starting for "" (driver="docker")
	I0415 11:25:40.825651   15868 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 11:25:40.825834   15868 start.go:159] libmachine.API.Create for "multinode-070000" (driver="docker")
	I0415 11:25:40.825858   15868 client.go:168] LocalClient.Create starting
	I0415 11:25:40.826071   15868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 11:25:40.826166   15868 main.go:141] libmachine: Decoding PEM data...
	I0415 11:25:40.826191   15868 main.go:141] libmachine: Parsing certificate...
	I0415 11:25:40.826273   15868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 11:25:40.826350   15868 main.go:141] libmachine: Decoding PEM data...
	I0415 11:25:40.826366   15868 main.go:141] libmachine: Parsing certificate...
	I0415 11:25:40.827112   15868 cli_runner.go:164] Run: docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 11:25:40.880071   15868 cli_runner.go:211] docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 11:25:40.880156   15868 network_create.go:281] running [docker network inspect multinode-070000] to gather additional debugging logs...
	I0415 11:25:40.880176   15868 cli_runner.go:164] Run: docker network inspect multinode-070000
	W0415 11:25:40.929910   15868 cli_runner.go:211] docker network inspect multinode-070000 returned with exit code 1
	I0415 11:25:40.929939   15868 network_create.go:284] error running [docker network inspect multinode-070000]: docker network inspect multinode-070000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-070000 not found
	I0415 11:25:40.929957   15868 network_create.go:286] output of [docker network inspect multinode-070000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-070000 not found
	
	** /stderr **
	I0415 11:25:40.930096   15868 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 11:25:40.981363   15868 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:25:40.982952   15868 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:25:40.984386   15868 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:25:40.984723   15868 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021f7a80}
	I0415 11:25:40.984735   15868 network_create.go:124] attempt to create docker network multinode-070000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 11:25:40.984801   15868 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-070000 multinode-070000
	W0415 11:25:41.034372   15868 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-070000 multinode-070000 returned with exit code 1
	W0415 11:25:41.034411   15868 network_create.go:149] failed to create docker network multinode-070000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-070000 multinode-070000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 11:25:41.034430   15868 network_create.go:116] failed to create docker network multinode-070000 192.168.76.0/24, will retry: subnet is taken
	I0415 11:25:41.036051   15868 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:25:41.036563   15868 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002260c50}
	I0415 11:25:41.036578   15868 network_create.go:124] attempt to create docker network multinode-070000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 11:25:41.036655   15868 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-070000 multinode-070000
	I0415 11:25:41.120871   15868 network_create.go:108] docker network multinode-070000 192.168.85.0/24 created
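
On the retry, 192.168.76.0/24 collides with an existing address pool ("Pool overlaps with other one on this address space"), so the walk advances one step to 192.168.85.0/24 and the same create succeeds. For reference, that successful command reproduced as a sketch, with the subnet, gateway, and profile name taken from this run and the remaining flags verbatim from the logged command line:

    package main

    import "os/exec"

    func main() {
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.85.0/24", "--gateway=192.168.85.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=65535",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=multinode-070000",
            "multinode-070000").CombinedOutput()
        if err != nil {
            panic(string(out))
        }
    }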
	I0415 11:25:41.120910   15868 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-070000" container
	I0415 11:25:41.121027   15868 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 11:25:41.170663   15868 cli_runner.go:164] Run: docker volume create multinode-070000 --label name.minikube.sigs.k8s.io=multinode-070000 --label created_by.minikube.sigs.k8s.io=true
	I0415 11:25:41.219716   15868 oci.go:103] Successfully created a docker volume multinode-070000
	I0415 11:25:41.219829   15868 cli_runner.go:164] Run: docker run --rm --name multinode-070000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-070000 --entrypoint /usr/bin/test -v multinode-070000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 11:25:41.500696   15868 oci.go:107] Successfully prepared a docker volume multinode-070000
	I0415 11:25:41.500724   15868 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 11:25:41.500737   15868 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 11:25:41.500843   15868 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-070000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 11:31:40.827411   15868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:31:40.827520   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:40.880807   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:40.880917   15868 retry.go:31] will retry after 339.516838ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:41.222838   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:41.275432   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:41.275548   15868 retry.go:31] will retry after 294.738399ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:41.570645   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:41.621819   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:41.621927   15868 retry.go:31] will retry after 616.013291ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:42.240290   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:42.313083   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:31:42.313188   15868 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:31:42.313208   15868 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:42.313265   15868 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 11:31:42.313318   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:42.362606   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:42.362703   15868 retry.go:31] will retry after 336.219653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:42.701211   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:42.753680   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:42.753771   15868 retry.go:31] will retry after 447.244687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:43.201815   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:43.254915   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:43.255030   15868 retry.go:31] will retry after 711.58768ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:43.968979   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:44.020887   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:31:44.020994   15868 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:31:44.021013   15868 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:44.021030   15868 start.go:128] duration metric: took 6m3.217004202s to createHost
	I0415 11:31:44.021093   15868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:31:44.021169   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:44.070703   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:44.070799   15868 retry.go:31] will retry after 273.245733ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:44.346412   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:44.399137   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:44.399240   15868 retry.go:31] will retry after 217.664135ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:44.618982   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:44.671769   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:44.671863   15868 retry.go:31] will retry after 822.824163ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:45.497054   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:45.550880   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:31:45.550984   15868 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:31:45.551002   15868 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:45.551053   15868 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 11:31:45.551115   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:45.601371   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:45.601463   15868 retry.go:31] will retry after 131.735389ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:45.734026   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:45.784692   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:45.784787   15868 retry.go:31] will retry after 535.604306ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:46.320772   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:46.372621   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:31:46.372718   15868 retry.go:31] will retry after 486.022628ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:46.859106   15868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:31:46.912615   15868 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:31:46.912716   15868 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:31:46.912738   15868 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:31:46.912753   15868 fix.go:56] duration metric: took 6m26.16966521s for fixHost
	I0415 11:31:46.912761   15868 start.go:83] releasing machines lock for "multinode-070000", held for 6m26.1697096s
	W0415 11:31:46.912834   15868 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-070000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-070000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 11:31:46.956135   15868 out.go:177] 
	W0415 11:31:46.977183   15868 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 11:31:46.977208   15868 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 11:31:46.977236   15868 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 11:31:46.998393   15868 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-070000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "b2f32f06ad80a2c67de9c394cf2c2e2ce6ee97013fbb63c62c02b1663e392d61",
	        "Created": "2024-04-15T18:25:41.082088823Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (112.911385ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:31:47.237067   16480 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (755.06s)
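The failure above reduces to one repeated probe: minikube asks Docker for the host port published for the container's 22/tcp and retries with short randomized backoff, but the container was never created, so every inspect exits 1 with "No such container". A minimal Go sketch of that lookup (not minikube's own cli_runner code; the container name and the inspect template are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort runs the same docker inspect template the log retries above and
// returns the host port mapped to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		// "No such container" surfaces here as a non-zero exit, as in the log.
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("multinode-070000")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh port:", port)
}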

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (91.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (101.460405ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-070000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- rollout status deployment/busybox: exit status 1 (100.918373ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.811241ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.802834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.970555ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.087117ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.504874ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.621423ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.999638ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.44062ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.670953ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.091188ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.118148ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (100.542856ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- exec  -- nslookup kubernetes.io: exit status 1 (101.14051ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- exec  -- nslookup kubernetes.default: exit status 1 (101.356898ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (101.340337ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "b2f32f06ad80a2c67de9c394cf2c2e2ce6ee97013fbb63c62c02b1663e392d61",
	        "Created": "2024-04-15T18:25:41.082088823Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (113.716404ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:33:18.862071   16573 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (91.62s)
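Each retry above is the same jsonpath query for pod IPs against a cluster whose API server is gone. A rough sketch of the polling loop, assuming a plain kubectl with --context as a stand-in for the test's out/minikube-darwin-amd64 kubectl -p wrapper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs runs the jsonpath query the test repeats above; no shell is involved,
// so the template needs no quoting.
func podIPs(context string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// The real test uses a growing backoff; a fixed sleep keeps the sketch short.
	for attempt := 1; attempt <= 11; attempt++ {
		ips, err := podIPs("multinode-070000")
		if err == nil && ips != "" {
			fmt.Println("pod IPs:", ips)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("failed to resolve pod IPs")
}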

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-070000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (100.222955ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-070000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "b2f32f06ad80a2c67de9c394cf2c2e2ce6ee97013fbb63c62c02b1663e392d61",
	        "Created": "2024-04-15T18:25:41.082088823Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (114.011447ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:33:19.129421   16582 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                    
TestMultiNode/serial/AddNode (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-070000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-070000 -v 3 --alsologtostderr: exit status 80 (199.659766ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:19.191516   16586 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:19.191703   16586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:19.191709   16586 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:19.191712   16586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:19.192409   16586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:19.193139   16586 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:19.193401   16586 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:19.193772   16586 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:33:19.242800   16586 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:33:19.264930   16586 out.go:177] 
	W0415 11:33:19.286655   16586 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-070000 host status: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-070000 host status: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	I0415 11:33:19.307425   16586 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-070000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "b2f32f06ad80a2c67de9c394cf2c2e2ce6ee97013fbb63c62c02b1663e392d61",
	        "Created": "2024-04-15T18:25:41.082088823Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (114.525357ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:33:19.496830   16592 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)
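node add bails out before doing anything because the control-plane container's state cannot be read. An illustrative probe (not minikube's mustload code) using the same --format={{.State.Status}} template from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns "running", "exited", etc., or a non-zero-exit error
// ("No such container") when the container is gone, as in the failure above.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("unknown state %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("multinode-070000")
	fmt.Println(state, err)
}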

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-070000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-070000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (36.797888ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-070000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-070000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-070000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "b2f32f06ad80a2c67de9c394cf2c2e2ce6ee97013fbb63c62c02b1663e392d61",
	        "Created": "2024-04-15T18:25:41.082088823Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (114.299349ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:33:19.701517   16599 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.20s)
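The second error above ("unexpected end of JSON input") follows from the first: kubectl printed nothing, so there was nothing to decode. For what the decode step handles when the query succeeds, here is an assumed helper (not the test's own code); note the jsonpath range template leaves a trailing comma before the closing bracket, which has to be trimmed before unmarshalling:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// decodeLabels parses the "[{labels},{labels},]" output the jsonpath template
// above produces. With empty input, as in this failure, json.Unmarshal
// reports "unexpected end of JSON input".
func decodeLabels(out string) ([]map[string]string, error) {
	out = strings.Replace(out, ",]", "]", 1) // drop the trailing comma
	var labels []map[string]string
	err := json.Unmarshal([]byte(out), &labels)
	return labels, err
}

func main() {
	sample := `[{"kubernetes.io/hostname":"multinode-070000"},]` // hypothetical output
	labels, err := decodeLabels(sample)
	fmt.Println(labels, err)
}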

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-070000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-019000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-070000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-070000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-070000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "b2f32f06ad80a2c67de9c394cf2c2e2ce6ee97013fbb63c62c02b1663e392d61",
	        "Created": "2024-04-15T18:25:41.082088823Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (113.042173ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:33:20.055567   16611 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.35s)
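The assertion above counts the Nodes array inside the profile's Config in `profile list --output json` and expects 3 entries, but the recorded profile only has its one control-plane node. A rough sketch of that count, with assumed struct names mirroring the JSON keys visible in the log (valid, Name, Config, Nodes):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profiles mirrors just the fields the node count needs from the
// profile-list JSON shown above.
type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name string `json:"Name"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, v := range p.Valid {
		fmt.Printf("%s: %d node(s)\n", v.Name, len(v.Config.Nodes))
	}
}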

                                                
                                    
TestMultiNode/serial/CopyFile (0.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status --output json --alsologtostderr: exit status 7 (113.914845ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-070000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:20.118624   16615 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:20.118824   16615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:20.118829   16615 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:20.118833   16615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:20.119016   16615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:20.119189   16615 out.go:298] Setting JSON to true
	I0415 11:33:20.119212   16615 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:20.119260   16615 notify.go:220] Checking for updates...
	I0415 11:33:20.119484   16615 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:20.119500   16615 status.go:255] checking status of multinode-070000 ...
	I0415 11:33:20.119891   16615 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:33:20.169545   16615 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:33:20.169599   16615 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:33:20.169618   16615 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:33:20.169641   16615 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:33:20.169649   16615 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-070000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "b2f32f06ad80a2c67de9c394cf2c2e2ce6ee97013fbb63c62c02b1663e392d61",
	        "Created": "2024-04-15T18:25:41.082088823Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (112.898902ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:33:20.335553   16621 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.28s)
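For reference, every `Nonexistent` verdict in this report comes from the same probe visible in the `--alsologtostderr` traces: run `docker container inspect <name> --format {{.State.Status}}` and treat exit status 1 with `No such container` on stderr as a missing host. A rough sketch of that mapping, assuming the docker CLI on PATH; the helper name and return strings are illustrative, not minikube's API:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out the way the traces above show and maps a
// missing container to "Nonexistent" (illustrative values).
func containerState(name string) string {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent"
		}
		return "Unknown" // some other docker failure
	}
	return strings.TrimSpace(string(out)) // e.g. "running", "exited"
}

func main() {
	fmt.Println(containerState("multinode-070000"))
}
```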

                                                
                                    
TestMultiNode/serial/StopNode (0.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 node stop m03: exit status 85 (156.116249ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-070000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status: exit status 7 (114.371788ms)

                                                
                                                
-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:33:20.606614   16627 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:33:20.606627   16627 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr: exit status 7 (114.515157ms)

                                                
                                                
-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:20.669964   16631 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:20.670231   16631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:20.670238   16631 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:20.670241   16631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:20.670432   16631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:20.670605   16631 out.go:298] Setting JSON to false
	I0415 11:33:20.670629   16631 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:20.670662   16631 notify.go:220] Checking for updates...
	I0415 11:33:20.670897   16631 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:20.670912   16631 status.go:255] checking status of multinode-070000 ...
	I0415 11:33:20.671372   16631 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:33:20.721128   16631 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:33:20.721195   16631 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:33:20.721214   16631 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:33:20.721234   16631 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:33:20.721241   16631 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr": multinode-070000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr": multinode-070000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr": multinode-070000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "b2f32f06ad80a2c67de9c394cf2c2e2ce6ee97013fbb63c62c02b1663e392d61",
	        "Created": "2024-04-15T18:25:41.082088823Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (113.46367ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:33:20.887661   16637 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.55s)
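For orientation, the distinct exit codes in this report each mark a different failure class. The mapping below is inferred purely from the log lines in this run (52 from the failed `start`, 82 from the `stop` timeout later on, 85 from the node lookups, 7 from the status probes); it is not minikube's full exit-code registry:

```go
package main

import "fmt"

// exitReasons lists only the exit codes that appear in this report, paired
// with the reason IDs printed next to them; inferred, not authoritative.
var exitReasons = map[int]string{
	7:  "status: host Nonexistent / not running",
	52: "start: driver failed to provision the node",
	82: "stop: GUEST_STOP_TIMEOUT",
	85: "node stop/start: GUEST_NODE_RETRIEVE",
}

func explain(code int) string {
	if reason, ok := exitReasons[code]; ok {
		return fmt.Sprintf("exit status %d -> %s", code, reason)
	}
	return fmt.Sprintf("exit status %d -> unknown in this report", code)
}

func main() {
	for _, code := range []int{7, 52, 82, 85} {
		fmt.Println(explain(code))
	}
}
```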

                                                
                                    
TestMultiNode/serial/StartAfterStop (46.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 node start m03 -v=7 --alsologtostderr: exit status 85 (154.00807ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:20.950912   16641 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:20.951133   16641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:20.951138   16641 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:20.951142   16641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:20.951315   16641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:20.951634   16641 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:20.951932   16641 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:20.973060   16641 out.go:177] 
	W0415 11:33:20.994090   16641 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0415 11:33:20.994113   16641 out.go:239] * 
	* 
	W0415 11:33:20.998681   16641 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 11:33:21.019813   16641 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0415 11:33:20.950912   16641 out.go:291] Setting OutFile to fd 1 ...
I0415 11:33:20.951133   16641 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:33:20.951138   16641 out.go:304] Setting ErrFile to fd 2...
I0415 11:33:20.951142   16641 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 11:33:20.951315   16641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
I0415 11:33:20.951634   16641 mustload.go:65] Loading cluster: multinode-070000
I0415 11:33:20.951932   16641 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 11:33:20.973060   16641 out.go:177] 
W0415 11:33:20.994090   16641 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0415 11:33:20.994113   16641 out.go:239] * 
* 
W0415 11:33:20.998681   16641 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0415 11:33:21.019813   16641 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-070000 node start m03 -v=7 --alsologtostderr": exit status 85
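`node start m03` dies in the same place as `node stop m03`: the profile config (still loaded from disk, per the `config.go:182` line) is searched for a node named m03, and only the primary entry remains. A minimal sketch of that lookup; the `Node` type is illustrative, though the profile dump later in this report does show the primary node with an empty `Name`:

```go
package main

import "fmt"

// Node is an illustrative stand-in for a profile's node entry.
type Node struct {
	Name string
}

// findNode mirrors the failing lookup: scan the profile's node list for the
// requested secondary node (minikube names them m02, m03, ...).
func findNode(nodes []Node, name string) (*Node, error) {
	for i := range nodes {
		if nodes[i].Name == name {
			return &nodes[i], nil
		}
	}
	return nil, fmt.Errorf("retrieving node: Could not find node %s", name)
}

func main() {
	nodes := []Node{{Name: ""}} // only the primary node survives in the profile
	if _, err := findNode(nodes, "m03"); err != nil {
		fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err)
	}
}
```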
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr: exit status 7 (114.069545ms)

                                                
                                                
-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:21.104610   16643 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:21.105191   16643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:21.105200   16643 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:21.105207   16643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:21.105847   16643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:21.106111   16643 out.go:298] Setting JSON to false
	I0415 11:33:21.106145   16643 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:21.106170   16643 notify.go:220] Checking for updates...
	I0415 11:33:21.106408   16643 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:21.106423   16643 status.go:255] checking status of multinode-070000 ...
	I0415 11:33:21.106816   16643 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:33:21.156082   16643 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:33:21.156139   16643 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:33:21.156160   16643 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:33:21.156182   16643 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:33:21.156190   16643 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr: exit status 7 (121.089954ms)

                                                
                                                
-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:21.749795   16647 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:21.750080   16647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:21.750085   16647 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:21.750089   16647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:21.750270   16647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:21.750455   16647 out.go:298] Setting JSON to false
	I0415 11:33:21.750478   16647 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:21.750507   16647 notify.go:220] Checking for updates...
	I0415 11:33:21.750746   16647 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:21.750763   16647 status.go:255] checking status of multinode-070000 ...
	I0415 11:33:21.751150   16647 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:33:21.803517   16647 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:33:21.803587   16647 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:33:21.803605   16647 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:33:21.803626   16647 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:33:21.803634   16647 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr: exit status 7 (116.572392ms)

                                                
                                                
-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:23.529314   16653 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:23.529599   16653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:23.529604   16653 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:23.529608   16653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:23.529801   16653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:23.529975   16653 out.go:298] Setting JSON to false
	I0415 11:33:23.530003   16653 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:23.530041   16653 notify.go:220] Checking for updates...
	I0415 11:33:23.531358   16653 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:23.531385   16653 status.go:255] checking status of multinode-070000 ...
	I0415 11:33:23.531834   16653 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:33:23.580748   16653 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:33:23.580809   16653 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:33:23.580830   16653 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:33:23.580849   16653 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:33:23.580857   16653 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr: exit status 7 (117.554131ms)

                                                
                                                
-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:25.799232   16660 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:25.799528   16660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:25.799533   16660 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:25.799537   16660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:25.799717   16660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:25.799895   16660 out.go:298] Setting JSON to false
	I0415 11:33:25.799920   16660 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:25.799954   16660 notify.go:220] Checking for updates...
	I0415 11:33:25.800192   16660 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:25.800208   16660 status.go:255] checking status of multinode-070000 ...
	I0415 11:33:25.800613   16660 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:33:25.852222   16660 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:33:25.852295   16660 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:33:25.852314   16660 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:33:25.852335   16660 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:33:25.852342   16660 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr: exit status 7 (119.86785ms)

                                                
                                                
-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:27.857393   16669 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:27.857594   16669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:27.857599   16669 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:27.857603   16669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:27.857786   16669 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:27.857958   16669 out.go:298] Setting JSON to false
	I0415 11:33:27.857981   16669 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:27.858012   16669 notify.go:220] Checking for updates...
	I0415 11:33:27.858241   16669 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:27.858258   16669 status.go:255] checking status of multinode-070000 ...
	I0415 11:33:27.858698   16669 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:33:27.909582   16669 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:33:27.909630   16669 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:33:27.909649   16669 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:33:27.909665   16669 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:33:27.909674   16669 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr: exit status 7 (117.511087ms)

                                                
                                                
-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:32.896628   16673 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:32.896804   16673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:32.896809   16673 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:32.896813   16673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:32.896983   16673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:32.897144   16673 out.go:298] Setting JSON to false
	I0415 11:33:32.897166   16673 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:32.897200   16673 notify.go:220] Checking for updates...
	I0415 11:33:32.897432   16673 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:32.897447   16673 status.go:255] checking status of multinode-070000 ...
	I0415 11:33:32.897835   16673 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:33:32.946730   16673 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:33:32.946805   16673 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:33:32.946823   16673 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:33:32.946846   16673 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:33:32.946854   16673 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr: exit status 7 (121.229514ms)

                                                
                                                
-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:41.738337   16683 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:41.738607   16683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:41.738613   16683 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:41.738617   16683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:41.738786   16683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:41.738959   16683 out.go:298] Setting JSON to false
	I0415 11:33:41.738987   16683 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:41.739014   16683 notify.go:220] Checking for updates...
	I0415 11:33:41.739247   16683 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:41.739263   16683 status.go:255] checking status of multinode-070000 ...
	I0415 11:33:41.739634   16683 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:33:41.792611   16683 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:33:41.792669   16683 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:33:41.792697   16683 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:33:41.792717   16683 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:33:41.792724   16683 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr: exit status 7 (115.758099ms)

                                                
                                                
-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:33:49.938586   16691 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:33:49.938778   16691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:49.938783   16691 out.go:304] Setting ErrFile to fd 2...
	I0415 11:33:49.938786   16691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:33:49.938966   16691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:33:49.939141   16691 out.go:298] Setting JSON to false
	I0415 11:33:49.939162   16691 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:33:49.939205   16691 notify.go:220] Checking for updates...
	I0415 11:33:49.940480   16691 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:33:49.940501   16691 status.go:255] checking status of multinode-070000 ...
	I0415 11:33:49.940897   16691 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:33:49.989844   16691 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:33:49.989899   16691 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:33:49.989918   16691 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:33:49.989939   16691 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:33:49.989946   16691 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr: exit status 7 (116.206998ms)

                                                
                                                
-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:34:07.029656   16701 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:34:07.029945   16701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:34:07.029950   16701 out.go:304] Setting ErrFile to fd 2...
	I0415 11:34:07.029954   16701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:34:07.030718   16701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:34:07.031092   16701 out.go:298] Setting JSON to false
	I0415 11:34:07.031124   16701 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:34:07.031167   16701 notify.go:220] Checking for updates...
	I0415 11:34:07.031402   16701 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:34:07.031418   16701 status.go:255] checking status of multinode-070000 ...
	I0415 11:34:07.031808   16701 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:07.081673   16701 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:07.081731   16701 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:34:07.081755   16701 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:34:07.081773   16701 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:34:07.081781   16701 status.go:263] The "multinode-070000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-070000 status -v=7 --alsologtostderr" : exit status 7
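The timestamps of the repeated status attempts above (11:33:20.95, :21.10, :21.75, :23.53, :25.80, :27.86, :32.90, :41.74, :49.94, then 11:34:07.03) show the test polling with a growing delay before asserting failure. A bare-bones version of that retry loop; the binary path and profile come from this report, while the backoff factor and attempt count are guesses:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollStatus reruns `minikube status` with an increasing delay between
// attempts, roughly matching the cadence in the timestamps above.
func pollStatus(profile string, attempts int) error {
	delay := 500 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("out/minikube-darwin-amd64",
			"-p", profile, "status", "-v=7", "--alsologtostderr").Run()
		if err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2 // grow the wait; the exact factor is a guess
	}
	return fmt.Errorf("profile %q never reported a healthy status: %w", profile, err)
}

func main() {
	if err := pollStatus("multinode-070000", 9); err != nil {
		fmt.Println(err)
	}
}
```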
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "b2f32f06ad80a2c67de9c394cf2c2e2ce6ee97013fbb63c62c02b1663e392d61",
	        "Created": "2024-04-15T18:25:41.082088823Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (113.639857ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:34:07.248296   16707 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (46.36s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (792.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-070000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-070000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-070000: exit status 82 (15.696612061s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-070000"  ...
	* Stopping node "multinode-070000"  ...
	* Stopping node "multinode-070000"  ...
	* Stopping node "multinode-070000"  ...
	* Stopping node "multinode-070000"  ...
	* Stopping node "multinode-070000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-070000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-070000" : exit status 82
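The six identical `* Stopping node` lines above are retries: each attempt re-inspects the container, hits the same `No such container` error, and after ~15.7s the loop surfaces GUEST_STOP_TIMEOUT with exit status 82. A rough sketch of that shape; the retry count matches the stdout above, but the delay and docker-level stop are illustrative simplifications:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// stopWithRetries retries a docker-level stop, printing the same progress
// line on each attempt, then gives up with a stop-timeout error.
func stopWithRetries(name string, retries int) error {
	for i := 0; i < retries; i++ {
		fmt.Printf("* Stopping node %q  ...\n", name)
		if err := exec.Command("docker", "stop", name).Run(); err == nil {
			return nil
		}
		time.Sleep(time.Second) // illustrative pause between attempts
	}
	return fmt.Errorf("GUEST_STOP_TIMEOUT: Unable to stop VM %q", name)
}

func main() {
	if err := stopWithRetries("multinode-070000", 6); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}
```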
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-070000 --wait=true -v=8 --alsologtostderr
E0415 11:34:26.003770    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 11:34:42.245960    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:34:42.801728    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 11:39:25.305018    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:39:42.253717    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:39:42.808949    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 11:44:42.252734    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:44:42.809006    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-070000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m56.651345811s)

                                                
                                                
-- stdout --
	* [multinode-070000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-070000" primary control-plane node in "multinode-070000" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* docker "multinode-070000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-070000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 11:34:23.071260   16732 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:34:23.071431   16732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:34:23.071437   16732 out.go:304] Setting ErrFile to fd 2...
	I0415 11:34:23.071440   16732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:34:23.071605   16732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:34:23.073866   16732 out.go:298] Setting JSON to false
	I0415 11:34:23.096170   16732 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5634,"bootTime":1713200429,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 11:34:23.096272   16732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 11:34:23.118368   16732 out.go:177] * [multinode-070000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 11:34:23.160048   16732 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 11:34:23.160074   16732 notify.go:220] Checking for updates...
	I0415 11:34:23.201813   16732 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 11:34:23.243818   16732 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 11:34:23.265207   16732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 11:34:23.286090   16732 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	I0415 11:34:23.306881   16732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 11:34:23.328886   16732 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:34:23.329047   16732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 11:34:23.384796   16732 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 11:34:23.384969   16732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 11:34:23.484908   16732 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:135 SystemTime:2024-04-15 18:34:23.475133854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 11:34:23.527273   16732 out.go:177] * Using the docker driver based on existing profile
	I0415 11:34:23.548493   16732 start.go:297] selected driver: docker
	I0415 11:34:23.548524   16732 start.go:901] validating driver "docker" against &{Name:multinode-070000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:34:23.548636   16732 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 11:34:23.548839   16732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 11:34:23.650314   16732 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:135 SystemTime:2024-04-15 18:34:23.640447537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 11:34:23.653361   16732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 11:34:23.653429   16732 cni.go:84] Creating CNI manager for ""
	I0415 11:34:23.653437   16732 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 11:34:23.653507   16732 start.go:340] cluster config:
	{Name:multinode-070000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:34:23.696111   16732 out.go:177] * Starting "multinode-070000" primary control-plane node in "multinode-070000" cluster
	I0415 11:34:23.719247   16732 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 11:34:23.741264   16732 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 11:34:23.783212   16732 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 11:34:23.783262   16732 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 11:34:23.783288   16732 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 11:34:23.783325   16732 cache.go:56] Caching tarball of preloaded images
	I0415 11:34:23.783541   16732 preload.go:173] Found /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 11:34:23.783561   16732 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 11:34:23.783736   16732 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/multinode-070000/config.json ...
	I0415 11:34:23.834838   16732 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 11:34:23.834861   16732 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 11:34:23.834883   16732 cache.go:194] Successfully downloaded all kic artifacts
	I0415 11:34:23.834950   16732 start.go:360] acquireMachinesLock for multinode-070000: {Name:mkf862a823ebac9b2411d5d0611461e02835237d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:34:23.835049   16732 start.go:364] duration metric: took 80.042µs to acquireMachinesLock for "multinode-070000"
	I0415 11:34:23.835073   16732 start.go:96] Skipping create...Using existing machine configuration
	I0415 11:34:23.835085   16732 fix.go:54] fixHost starting: 
	I0415 11:34:23.835317   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:23.885238   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:23.885310   16732 fix.go:112] recreateIfNeeded on multinode-070000: state= err=unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:23.885327   16732 fix.go:117] machineExists: false. err=machine does not exist
	I0415 11:34:23.907268   16732 out.go:177] * docker "multinode-070000" container is missing, will recreate.
	I0415 11:34:23.949746   16732 delete.go:124] DEMOLISHING multinode-070000 ...
	I0415 11:34:23.949943   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:24.000527   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	W0415 11:34:24.000582   16732 stop.go:83] unable to get state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:24.000603   16732 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:24.000964   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:24.050484   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:24.050537   16732 delete.go:82] Unable to get host status for multinode-070000, assuming it has already been deleted: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:24.050617   16732 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-070000
	W0415 11:34:24.100534   16732 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-070000 returned with exit code 1
	I0415 11:34:24.100565   16732 kic.go:371] could not find the container multinode-070000 to remove it. will try anyways
	I0415 11:34:24.100631   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:24.150123   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	W0415 11:34:24.150173   16732 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:24.150250   16732 cli_runner.go:164] Run: docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0"
	W0415 11:34:24.199522   16732 cli_runner.go:211] docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 11:34:24.199551   16732 oci.go:650] error shutdown multinode-070000: docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:25.200012   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:25.252472   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:25.252516   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:25.252528   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:34:25.252572   16732 retry.go:31] will retry after 710.59262ms: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:25.965598   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:26.017398   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:26.017443   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:26.017461   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:34:26.017483   16732 retry.go:31] will retry after 540.661563ms: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:26.559041   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:26.610448   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:26.610489   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:26.610498   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:34:26.610520   16732 retry.go:31] will retry after 588.288008ms: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:27.201140   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:27.254089   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:27.254132   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:27.254145   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:34:27.254170   16732 retry.go:31] will retry after 1.652530346s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:28.907857   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:28.960983   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:28.961032   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:28.961042   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:34:28.961067   16732 retry.go:31] will retry after 2.449374122s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:31.412269   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:31.464824   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:31.464873   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:31.464883   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:34:31.464906   16732 retry.go:31] will retry after 2.33735805s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:33.802696   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:33.856405   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:33.856450   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:33.856460   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:34:33.856489   16732 retry.go:31] will retry after 3.422520524s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:37.280060   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:37.330827   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:37.330878   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:37.330888   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:34:37.330913   16732 retry.go:31] will retry after 5.006499359s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:42.338122   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:34:42.391943   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:34:42.391985   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:34:42.391993   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:34:42.392026   16732 oci.go:88] couldn't shut down multinode-070000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	 
	I0415 11:34:42.392101   16732 cli_runner.go:164] Run: docker rm -f -v multinode-070000
	I0415 11:34:42.441703   16732 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-070000
	W0415 11:34:42.490604   16732 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-070000 returned with exit code 1
	I0415 11:34:42.490718   16732 cli_runner.go:164] Run: docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 11:34:42.540919   16732 cli_runner.go:164] Run: docker network rm multinode-070000
	I0415 11:34:42.640745   16732 fix.go:124] Sleeping 1 second for extra luck!
	I0415 11:34:43.642019   16732 start.go:125] createHost starting for "" (driver="docker")
	I0415 11:34:43.664185   16732 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 11:34:43.664336   16732 start.go:159] libmachine.API.Create for "multinode-070000" (driver="docker")
	I0415 11:34:43.664364   16732 client.go:168] LocalClient.Create starting
	I0415 11:34:43.664555   16732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 11:34:43.664636   16732 main.go:141] libmachine: Decoding PEM data...
	I0415 11:34:43.664660   16732 main.go:141] libmachine: Parsing certificate...
	I0415 11:34:43.664744   16732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 11:34:43.664798   16732 main.go:141] libmachine: Decoding PEM data...
	I0415 11:34:43.664809   16732 main.go:141] libmachine: Parsing certificate...
	I0415 11:34:43.685422   16732 cli_runner.go:164] Run: docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 11:34:43.736621   16732 cli_runner.go:211] docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 11:34:43.736708   16732 network_create.go:281] running [docker network inspect multinode-070000] to gather additional debugging logs...
	I0415 11:34:43.736725   16732 cli_runner.go:164] Run: docker network inspect multinode-070000
	W0415 11:34:43.786153   16732 cli_runner.go:211] docker network inspect multinode-070000 returned with exit code 1
	I0415 11:34:43.786181   16732 network_create.go:284] error running [docker network inspect multinode-070000]: docker network inspect multinode-070000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-070000 not found
	I0415 11:34:43.786192   16732 network_create.go:286] output of [docker network inspect multinode-070000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-070000 not found
	
	** /stderr **
	I0415 11:34:43.786312   16732 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 11:34:43.837047   16732 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:34:43.838658   16732 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:34:43.839025   16732 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000af7ef0}
	I0415 11:34:43.839040   16732 network_create.go:124] attempt to create docker network multinode-070000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 11:34:43.839112   16732 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-070000 multinode-070000
	I0415 11:34:43.924462   16732 network_create.go:108] docker network multinode-070000 192.168.67.0/24 created
	I0415 11:34:43.924511   16732 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-070000" container
	I0415 11:34:43.924617   16732 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 11:34:43.974313   16732 cli_runner.go:164] Run: docker volume create multinode-070000 --label name.minikube.sigs.k8s.io=multinode-070000 --label created_by.minikube.sigs.k8s.io=true
	I0415 11:34:44.024039   16732 oci.go:103] Successfully created a docker volume multinode-070000
	I0415 11:34:44.024151   16732 cli_runner.go:164] Run: docker run --rm --name multinode-070000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-070000 --entrypoint /usr/bin/test -v multinode-070000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 11:34:44.310917   16732 oci.go:107] Successfully prepared a docker volume multinode-070000
	I0415 11:34:44.310956   16732 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 11:34:44.310969   16732 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 11:34:44.311069   16732 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-070000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 11:40:43.672939   16732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:40:43.673074   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:43.723820   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:43.723934   16732 retry.go:31] will retry after 281.011727ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:44.007260   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:44.061400   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:44.061511   16732 retry.go:31] will retry after 550.909033ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:44.614778   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:44.668336   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:44.668451   16732 retry.go:31] will retry after 554.947342ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:45.225733   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:45.279062   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:40:45.279179   16732 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:40:45.279200   16732 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:45.279268   16732 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 11:40:45.279323   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:45.329517   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:45.329612   16732 retry.go:31] will retry after 362.108451ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:45.694028   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:45.746917   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:45.747014   16732 retry.go:31] will retry after 478.051702ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:46.227463   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:46.280808   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:46.280907   16732 retry.go:31] will retry after 653.572339ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:46.936878   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:46.991303   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:40:46.991409   16732 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:40:46.991427   16732 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:46.991441   16732 start.go:128] duration metric: took 6m3.34225692s to createHost
	I0415 11:40:46.991505   16732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:40:46.991565   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:47.040701   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:47.040789   16732 retry.go:31] will retry after 220.479415ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:47.261942   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:47.314606   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:47.314701   16732 retry.go:31] will retry after 251.992629ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:47.567027   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:47.619188   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:47.619282   16732 retry.go:31] will retry after 457.189218ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:48.078069   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:48.131660   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:48.131752   16732 retry.go:31] will retry after 721.230815ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:48.855416   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:48.908752   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:40:48.908845   16732 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:40:48.908873   16732 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:48.908926   16732 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 11:40:48.908979   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:48.958356   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:48.958448   16732 retry.go:31] will retry after 207.442395ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:49.168207   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:49.219258   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:49.219352   16732 retry.go:31] will retry after 341.914819ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:49.563582   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:49.616227   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:49.616318   16732 retry.go:31] will retry after 489.585473ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:50.108302   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:50.161226   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:40:50.161321   16732 retry.go:31] will retry after 698.615643ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:50.862225   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:40:50.913830   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:40:50.913935   16732 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:40:50.913947   16732 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:50.913960   16732 fix.go:56] duration metric: took 6m27.071757316s for fixHost
	I0415 11:40:50.913966   16732 start.go:83] releasing machines lock for "multinode-070000", held for 6m27.071787235s
	W0415 11:40:50.913982   16732 start.go:713] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 11:40:50.914045   16732 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 11:40:50.914051   16732 start.go:728] Will try again in 5 seconds ...
	I0415 11:40:55.914530   16732 start.go:360] acquireMachinesLock for multinode-070000: {Name:mkf862a823ebac9b2411d5d0611461e02835237d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:40:55.914732   16732 start.go:364] duration metric: took 159.812µs to acquireMachinesLock for "multinode-070000"
	I0415 11:40:55.914775   16732 start.go:96] Skipping create...Using existing machine configuration
	I0415 11:40:55.914785   16732 fix.go:54] fixHost starting: 
	I0415 11:40:55.915232   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:40:55.967928   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:40:55.967969   16732 fix.go:112] recreateIfNeeded on multinode-070000: state= err=unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:55.967985   16732 fix.go:117] machineExists: false. err=machine does not exist
	I0415 11:40:55.989830   16732 out.go:177] * docker "multinode-070000" container is missing, will recreate.
	I0415 11:40:56.032496   16732 delete.go:124] DEMOLISHING multinode-070000 ...
	I0415 11:40:56.032744   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:40:56.084152   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	W0415 11:40:56.084198   16732 stop.go:83] unable to get state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:56.084216   16732 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:56.084583   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:40:56.133947   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:40:56.133995   16732 delete.go:82] Unable to get host status for multinode-070000, assuming it has already been deleted: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:56.134074   16732 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-070000
	W0415 11:40:56.183307   16732 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-070000 returned with exit code 1
	I0415 11:40:56.183346   16732 kic.go:371] could not find the container multinode-070000 to remove it. will try anyways
	I0415 11:40:56.183416   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:40:56.232890   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	W0415 11:40:56.232948   16732 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:56.233029   16732 cli_runner.go:164] Run: docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0"
	W0415 11:40:56.282276   16732 cli_runner.go:211] docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 11:40:56.282304   16732 oci.go:650] error shutdown multinode-070000: docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:57.282765   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:40:57.334277   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:40:57.334322   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:57.334334   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:40:57.334356   16732 retry.go:31] will retry after 442.163174ms: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:57.778822   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:40:57.830676   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:40:57.830721   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:57.830729   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:40:57.830751   16732 retry.go:31] will retry after 672.694952ms: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:58.504646   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:40:58.557116   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:40:58.557159   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:58.557170   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:40:58.557204   16732 retry.go:31] will retry after 1.106429475s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:59.665000   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:40:59.716995   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:40:59.717047   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:40:59.717058   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:40:59.717081   16732 retry.go:31] will retry after 1.825658924s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:41:01.544015   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:41:01.597041   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:41:01.597089   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:41:01.597099   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:41:01.597121   16732 retry.go:31] will retry after 2.04438328s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:41:03.643140   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:41:03.695967   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:41:03.696010   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:41:03.696019   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:41:03.696042   16732 retry.go:31] will retry after 4.133903986s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:41:07.832273   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:41:07.884796   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:41:07.884850   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:41:07.884861   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:41:07.884887   16732 retry.go:31] will retry after 4.857372125s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:41:12.742733   16732 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:41:12.795715   16732 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:41:12.795758   16732 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:41:12.795767   16732 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:41:12.795796   16732 oci.go:88] couldn't shut down multinode-070000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	 
	I0415 11:41:12.795862   16732 cli_runner.go:164] Run: docker rm -f -v multinode-070000
	I0415 11:41:12.845498   16732 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-070000
	W0415 11:41:12.894645   16732 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-070000 returned with exit code 1
	I0415 11:41:12.894763   16732 cli_runner.go:164] Run: docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 11:41:12.944347   16732 cli_runner.go:164] Run: docker network rm multinode-070000
	I0415 11:41:13.052383   16732 fix.go:124] Sleeping 1 second for extra luck!
	I0415 11:41:14.053123   16732 start.go:125] createHost starting for "" (driver="docker")
	I0415 11:41:14.075072   16732 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 11:41:14.075196   16732 start.go:159] libmachine.API.Create for "multinode-070000" (driver="docker")
	I0415 11:41:14.075214   16732 client.go:168] LocalClient.Create starting
	I0415 11:41:14.075368   16732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 11:41:14.075434   16732 main.go:141] libmachine: Decoding PEM data...
	I0415 11:41:14.075450   16732 main.go:141] libmachine: Parsing certificate...
	I0415 11:41:14.075507   16732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 11:41:14.075556   16732 main.go:141] libmachine: Decoding PEM data...
	I0415 11:41:14.075571   16732 main.go:141] libmachine: Parsing certificate...
	I0415 11:41:14.096293   16732 cli_runner.go:164] Run: docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 11:41:14.146946   16732 cli_runner.go:211] docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 11:41:14.147036   16732 network_create.go:281] running [docker network inspect multinode-070000] to gather additional debugging logs...
	I0415 11:41:14.147054   16732 cli_runner.go:164] Run: docker network inspect multinode-070000
	W0415 11:41:14.197256   16732 cli_runner.go:211] docker network inspect multinode-070000 returned with exit code 1
	I0415 11:41:14.197292   16732 network_create.go:284] error running [docker network inspect multinode-070000]: docker network inspect multinode-070000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-070000 not found
	I0415 11:41:14.197305   16732 network_create.go:286] output of [docker network inspect multinode-070000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-070000 not found
	
	** /stderr **
	I0415 11:41:14.197456   16732 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 11:41:14.248687   16732 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:41:14.250283   16732 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:41:14.251835   16732 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:41:14.252196   16732 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00221ee50}
	I0415 11:41:14.252210   16732 network_create.go:124] attempt to create docker network multinode-070000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 11:41:14.252278   16732 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-070000 multinode-070000
	W0415 11:41:14.301697   16732 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-070000 multinode-070000 returned with exit code 1
	W0415 11:41:14.301732   16732 network_create.go:149] failed to create docker network multinode-070000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-070000 multinode-070000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 11:41:14.301754   16732 network_create.go:116] failed to create docker network multinode-070000 192.168.76.0/24, will retry: subnet is taken
	I0415 11:41:14.303354   16732 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:41:14.303723   16732 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00221ff10}
	I0415 11:41:14.303736   16732 network_create.go:124] attempt to create docker network multinode-070000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 11:41:14.303800   16732 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-070000 multinode-070000
	I0415 11:41:14.389648   16732 network_create.go:108] docker network multinode-070000 192.168.85.0/24 created
	I0415 11:41:14.389688   16732 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-070000" container
	I0415 11:41:14.389794   16732 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 11:41:14.440405   16732 cli_runner.go:164] Run: docker volume create multinode-070000 --label name.minikube.sigs.k8s.io=multinode-070000 --label created_by.minikube.sigs.k8s.io=true
	I0415 11:41:14.489332   16732 oci.go:103] Successfully created a docker volume multinode-070000
	I0415 11:41:14.489449   16732 cli_runner.go:164] Run: docker run --rm --name multinode-070000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-070000 --entrypoint /usr/bin/test -v multinode-070000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 11:41:14.797283   16732 oci.go:107] Successfully prepared a docker volume multinode-070000
	I0415 11:41:14.797331   16732 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 11:41:14.797350   16732 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 11:41:14.797448   16732 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-070000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 11:47:14.075133   16732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:47:14.075301   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:14.128873   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:14.128992   16732 retry.go:31] will retry after 193.089571ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:14.322726   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:14.376084   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:14.376202   16732 retry.go:31] will retry after 358.049137ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:14.736652   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:14.789827   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:14.789920   16732 retry.go:31] will retry after 512.133142ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:15.304512   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:15.357590   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:47:15.357693   16732 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:47:15.357716   16732 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:15.357783   16732 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 11:47:15.357841   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:15.408300   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:15.408403   16732 retry.go:31] will retry after 202.714924ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:15.613382   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:15.666763   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:15.666863   16732 retry.go:31] will retry after 380.697762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:16.049206   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:16.102070   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:16.102167   16732 retry.go:31] will retry after 523.667334ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:16.627517   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:16.680756   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:47:16.680872   16732 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:47:16.680894   16732 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:16.680901   16732 start.go:128] duration metric: took 6m2.629485496s to createHost
	I0415 11:47:16.680969   16732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 11:47:16.681029   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:16.729760   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:16.729861   16732 retry.go:31] will retry after 218.240233ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:16.948748   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:17.002617   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:17.002707   16732 retry.go:31] will retry after 395.390976ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:17.398750   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:17.451668   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:17.451769   16732 retry.go:31] will retry after 810.341065ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:18.262625   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:18.314707   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:47:18.314816   16732 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:47:18.314832   16732 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:18.314889   16732 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 11:47:18.314944   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:18.364612   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:18.364702   16732 retry.go:31] will retry after 163.888954ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:18.530940   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:18.583810   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:18.583902   16732 retry.go:31] will retry after 203.317377ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:18.789557   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:18.842274   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	I0415 11:47:18.842368   16732 retry.go:31] will retry after 623.675722ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:19.466539   16732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000
	W0415 11:47:19.518947   16732 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000 returned with exit code 1
	W0415 11:47:19.519048   16732 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	W0415 11:47:19.519073   16732 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-070000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-070000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:19.519086   16732 fix.go:56] duration metric: took 6m23.606146574s for fixHost
	I0415 11:47:19.519092   16732 start.go:83] releasing machines lock for "multinode-070000", held for 6m23.606190733s
	W0415 11:47:19.519168   16732 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-070000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-070000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 11:47:19.561652   16732 out.go:177] 
	W0415 11:47:19.582578   16732 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 11:47:19.582643   16732 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 11:47:19.582672   16732 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 11:47:19.603590   16732 out.go:177] 

** /stderr **
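Note: the oci.go and retry.go entries in the log above show minikube polling docker container inspect --format={{.State.Status}} with growing delays (442ms, 672ms, 1.1s, ... 4.9s) before giving up on verifying shutdown. Below is a minimal Go sketch of that poll-with-backoff pattern; the function names and backoff constants are illustrative assumptions, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerStatus shells out the same way the log does:
//   docker container inspect <name> --format={{.State.Status}}
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// waitForExited polls with a roughly doubling delay, mirroring the
// "will retry after ..." intervals printed by retry.go above.
func waitForExited(name string, attempts int) error {
	delay := 400 * time.Millisecond // illustrative starting point
	for i := 0; i < attempts; i++ {
		if status, err := containerStatus(name); err == nil && status == "exited" {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("couldn't verify container %s is exited", name)
}

func main() {
	// As in the log, a failure here is treated as non-fatal ("might be okay")
	// because the container may simply never have been created.
	if err := waitForExited("multinode-070000", 8); err != nil {
		fmt.Println(err)
	}
}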
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-070000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-070000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "8ff6dc84bb27a6be1189f37cf7ce1cdd647a5adf7d86d36f8fb955ceeb0bb4a5",
	        "Created": "2024-04-15T18:41:14.35039401Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

-- /stdout --
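Note: the docker inspect output above pins down the failure mode: the multinode-070000 network (subnet 192.168.85.0/24) was recreated, but "Containers": {} shows the node container itself never came up, so every later status call reports Nonexistent. The earlier network_create.go lines also show how that subnet was chosen: candidates advance 192.168.49.0/24 -> .58 -> .67 -> .76 (rejected with "Pool overlaps with other one on this address space") -> .85. A hedged Go sketch of that probing loop, with hypothetical names and the create flags trimmed for brevity:

package main

import (
	"fmt"
	"os/exec"
)

// tryCreateNetwork attempts the same "docker network create" call that
// network_create.go logs above (labels and MTU options omitted here).
func tryCreateNetwork(name, subnet, gateway string) error {
	return exec.Command("docker", "network", "create", "--driver=bridge",
		"--subnet="+subnet, "--gateway="+gateway, name).Run()
}

func main() {
	// The log shows the third octet stepping by 9 until creation succeeds.
	for octet := 49; octet < 256; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		if err := tryCreateNetwork("multinode-070000", subnet, gateway); err != nil {
			fmt.Printf("subnet %s taken, trying the next candidate\n", subnet)
			continue
		}
		fmt.Printf("created network on %s\n", subnet)
		break
	}
}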
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (113.673631ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0415 11:47:19.912709   17287 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (792.66s)
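Note: the six-minute createHost stall above ends in repeated failures of the form get port 22 for "multinode-070000". minikube resolves the SSH port by formatting docker container inspect with a Go template over .NetworkSettings.Ports; with no container, inspect exits 1 and the lookup is retried until the 360s host-creation timeout fires. A self-contained sketch of that lookup, using the same inspect template that appears throughout the log (the surrounding names are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort extracts the host port mapped to 22/tcp.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container,
		"--format", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`).Output()
	if err != nil {
		// "No such container" surfaces here as exit status 1, which the
		// caller in the log keeps retrying.
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("multinode-070000")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port)
}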

TestMultiNode/serial/DeleteNode (0.48s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 node delete m03: exit status 80 (201.233469ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-070000 host status: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	

** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-070000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr: exit status 7 (114.081203ms)

-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0415 11:47:20.177262   17295 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:47:20.177548   17295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:47:20.177554   17295 out.go:304] Setting ErrFile to fd 2...
	I0415 11:47:20.177558   17295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:47:20.177744   17295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:47:20.177931   17295 out.go:298] Setting JSON to false
	I0415 11:47:20.177954   17295 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:47:20.177998   17295 notify.go:220] Checking for updates...
	I0415 11:47:20.178231   17295 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:47:20.178249   17295 status.go:255] checking status of multinode-070000 ...
	I0415 11:47:20.178706   17295 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:20.228267   17295 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:20.228313   17295 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:47:20.228339   17295 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:47:20.228358   17295 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:47:20.228365   17295 status.go:263] The "multinode-070000" host does not exist!

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "8ff6dc84bb27a6be1189f37cf7ce1cdd647a5adf7d86d36f8fb955ceeb0bb4a5",
	        "Created": "2024-04-15T18:41:14.35039401Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (113.532263ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0415 11:47:20.394887   17301 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.48s)
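Note: DeleteNode fails for the same underlying reason as the restart, not a new one. The status.go lines above show that a failed docker container inspect is mapped to host: Nonexistent rather than propagated as a hard error, which is why the test sees exit status 7 instead of a crash. A minimal sketch of that mapping; the helper name is hypothetical, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState mirrors the behaviour logged above: when inspect fails, the
// host is reported as "Nonexistent" instead of returning the error.
func hostState(name string) string {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "Nonexistent"
	}
	return strings.TrimSpace(string(out))
}

func main() {
	fmt.Println(hostState("multinode-070000")) // prints "Nonexistent" here
}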

TestMultiNode/serial/StopMultiNode (13.21s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 stop: exit status 82 (12.811027922s)

-- stdout --
	* Stopping node "multinode-070000"  ...
	* Stopping node "multinode-070000"  ...
	* Stopping node "multinode-070000"  ...
	* Stopping node "multinode-070000"  ...
	* Stopping node "multinode-070000"  ...
	* Stopping node "multinode-070000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-070000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-070000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status: exit status 7 (113.965431ms)

-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0415 11:47:33.320109   17323 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:47:33.320121   17323 status.go:263] The "multinode-070000" host does not exist!

** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr: exit status 7 (113.971537ms)

-- stdout --
	multinode-070000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0415 11:47:33.383147   17327 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:47:33.383341   17327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:47:33.383346   17327 out.go:304] Setting ErrFile to fd 2...
	I0415 11:47:33.383350   17327 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:47:33.383530   17327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:47:33.383710   17327 out.go:298] Setting JSON to false
	I0415 11:47:33.383733   17327 mustload.go:65] Loading cluster: multinode-070000
	I0415 11:47:33.383768   17327 notify.go:220] Checking for updates...
	I0415 11:47:33.383997   17327 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:47:33.384013   17327 status.go:255] checking status of multinode-070000 ...
	I0415 11:47:33.384398   17327 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:33.434094   17327 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:33.434157   17327 status.go:330] multinode-070000 host status = "" (err=state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	)
	I0415 11:47:33.434180   17327 status.go:257] multinode-070000 status: &{Name:multinode-070000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 11:47:33.434200   17327 status.go:260] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	E0415 11:47:33.434207   17327 status.go:263] The "multinode-070000" host does not exist!

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr": multinode-070000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-070000 status --alsologtostderr": multinode-070000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "8ff6dc84bb27a6be1189f37cf7ce1cdd647a5adf7d86d36f8fb955ceeb0bb4a5",
	        "Created": "2024-04-15T18:41:14.35039401Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (114.924064ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0415 11:47:33.605863   17333 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (13.21s)
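Note: the six identical "Stopping node" lines followed by exit status 82 (GUEST_STOP_TIMEOUT) suggest a bounded retry around the stop path that gives up once the container can no longer be inspected. The following is a hypothetical reconstruction of that shape for illustration only; the attempt count and helper names are assumptions, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Exit code observed above when the stop loop exhausts its attempts.
const exGuestStopTimeout = 82

func stopContainer(name string) error {
	return exec.Command("docker", "stop", name).Run()
}

func main() {
	const name = "multinode-070000"
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("* Stopping node %q ...\n", name)
		if stopContainer(name) == nil {
			return
		}
	}
	os.Exit(exGuestStopTimeout)
}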

TestMultiNode/serial/RestartMultiNode (98.75s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-070000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-070000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m38.577580359s)

-- stdout --
	* [multinode-070000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-070000" primary control-plane node in "multinode-070000" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* docker "multinode-070000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

-- /stdout --
** stderr ** 
	I0415 11:47:33.667896   17337 out.go:291] Setting OutFile to fd 1 ...
	I0415 11:47:33.668057   17337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:47:33.668068   17337 out.go:304] Setting ErrFile to fd 2...
	I0415 11:47:33.668072   17337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 11:47:33.668254   17337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 11:47:33.669771   17337 out.go:298] Setting JSON to false
	I0415 11:47:33.692709   17337 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6424,"bootTime":1713200429,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 11:47:33.692793   17337 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 11:47:33.714221   17337 out.go:177] * [multinode-070000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 11:47:33.756034   17337 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 11:47:33.756182   17337 notify.go:220] Checking for updates...
	I0415 11:47:33.777941   17337 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 11:47:33.798744   17337 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 11:47:33.819858   17337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 11:47:33.841025   17337 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	I0415 11:47:33.861865   17337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 11:47:33.883654   17337 config.go:182] Loaded profile config "multinode-070000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 11:47:33.884421   17337 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 11:47:33.940267   17337 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 11:47:33.940437   17337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 11:47:34.040359   17337 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:false NGoroutines:155 SystemTime:2024-04-15 18:47:34.030219697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 11:47:34.082670   17337 out.go:177] * Using the docker driver based on existing profile
	I0415 11:47:34.103851   17337 start.go:297] selected driver: docker
	I0415 11:47:34.103883   17337 start.go:901] validating driver "docker" against &{Name:multinode-070000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:47:34.104025   17337 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 11:47:34.104227   17337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 11:47:34.205177   17337 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:false NGoroutines:155 SystemTime:2024-04-15 18:47:34.195307134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 11:47:34.208138   17337 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 11:47:34.208205   17337 cni.go:84] Creating CNI manager for ""
	I0415 11:47:34.208214   17337 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 11:47:34.208276   17337 start.go:340] cluster config:
	{Name:multinode-070000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 11:47:34.250045   17337 out.go:177] * Starting "multinode-070000" primary control-plane node in "multinode-070000" cluster
	I0415 11:47:34.271055   17337 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 11:47:34.292254   17337 out.go:177] * Pulling base image v0.0.43-1713176859-18634 ...
	I0415 11:47:34.333982   17337 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 11:47:34.334024   17337 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 11:47:34.334065   17337 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 11:47:34.334085   17337 cache.go:56] Caching tarball of preloaded images
	I0415 11:47:34.334314   17337 preload.go:173] Found /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 11:47:34.334332   17337 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 11:47:34.335231   17337 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/multinode-070000/config.json ...
	I0415 11:47:34.385531   17337 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon, skipping pull
	I0415 11:47:34.385553   17337 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in daemon, skipping load
	I0415 11:47:34.385581   17337 cache.go:194] Successfully downloaded all kic artifacts
	I0415 11:47:34.385631   17337 start.go:360] acquireMachinesLock for multinode-070000: {Name:mkf862a823ebac9b2411d5d0611461e02835237d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 11:47:34.385733   17337 start.go:364] duration metric: took 81.748µs to acquireMachinesLock for "multinode-070000"
	I0415 11:47:34.385755   17337 start.go:96] Skipping create...Using existing machine configuration
	I0415 11:47:34.385768   17337 fix.go:54] fixHost starting: 
	I0415 11:47:34.386029   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:34.434900   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:34.434977   17337 fix.go:112] recreateIfNeeded on multinode-070000: state= err=unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:34.434995   17337 fix.go:117] machineExists: false. err=machine does not exist
	I0415 11:47:34.456812   17337 out.go:177] * docker "multinode-070000" container is missing, will recreate.
	I0415 11:47:34.478468   17337 delete.go:124] DEMOLISHING multinode-070000 ...
	I0415 11:47:34.478681   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:34.530349   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	W0415 11:47:34.530397   17337 stop.go:83] unable to get state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:34.530413   17337 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:34.530785   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:34.580508   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:34.580557   17337 delete.go:82] Unable to get host status for multinode-070000, assuming it has already been deleted: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:34.580659   17337 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-070000
	W0415 11:47:34.629584   17337 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-070000 returned with exit code 1
	I0415 11:47:34.629621   17337 kic.go:371] could not find the container multinode-070000 to remove it. will try anyways
	I0415 11:47:34.629692   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:34.679178   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	W0415 11:47:34.679231   17337 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:34.679305   17337 cli_runner.go:164] Run: docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0"
	W0415 11:47:34.728617   17337 cli_runner.go:211] docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 11:47:34.728646   17337 oci.go:650] error shutdown multinode-070000: docker exec --privileged -t multinode-070000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:35.728973   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:35.781595   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:35.781638   17337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:35.781646   17337 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:47:35.781685   17337 retry.go:31] will retry after 458.463297ms: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:36.241058   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:36.294094   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:36.294141   17337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:36.294150   17337 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:47:36.294182   17337 retry.go:31] will retry after 746.726121ms: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:37.041802   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:37.096637   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:37.096679   17337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:37.096700   17337 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:47:37.096725   17337 retry.go:31] will retry after 1.275321479s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:38.372311   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:38.424284   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:38.424336   17337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:38.424345   17337 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:47:38.424370   17337 retry.go:31] will retry after 997.585797ms: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:39.424315   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:39.476502   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:39.476543   17337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:39.476551   17337 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:47:39.476578   17337 retry.go:31] will retry after 2.574225046s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:42.052296   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:42.104935   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:42.104979   17337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:42.104990   17337 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:47:42.105019   17337 retry.go:31] will retry after 5.355667469s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:47.461047   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:47.514033   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:47.514076   17337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:47.514087   17337 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:47:47.514112   17337 retry.go:31] will retry after 5.247286131s: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:52.761725   17337 cli_runner.go:164] Run: docker container inspect multinode-070000 --format={{.State.Status}}
	W0415 11:47:52.814109   17337 cli_runner.go:211] docker container inspect multinode-070000 --format={{.State.Status}} returned with exit code 1
	I0415 11:47:52.814160   17337 oci.go:662] temporary error verifying shutdown: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	I0415 11:47:52.814177   17337 oci.go:664] temporary error: container multinode-070000 status is  but expect it to be exited
	I0415 11:47:52.814205   17337 oci.go:88] couldn't shut down multinode-070000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000
	 
	I0415 11:47:52.814278   17337 cli_runner.go:164] Run: docker rm -f -v multinode-070000
	I0415 11:47:52.863977   17337 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-070000
	W0415 11:47:52.912983   17337 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-070000 returned with exit code 1
	I0415 11:47:52.913099   17337 cli_runner.go:164] Run: docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 11:47:52.962614   17337 cli_runner.go:164] Run: docker network rm multinode-070000
	I0415 11:47:53.083900   17337 fix.go:124] Sleeping 1 second for extra luck!
	I0415 11:47:54.084590   17337 start.go:125] createHost starting for "" (driver="docker")
	I0415 11:47:54.106461   17337 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 11:47:54.106653   17337 start.go:159] libmachine.API.Create for "multinode-070000" (driver="docker")
	I0415 11:47:54.106687   17337 client.go:168] LocalClient.Create starting
	I0415 11:47:54.106938   17337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/ca.pem
	I0415 11:47:54.107030   17337 main.go:141] libmachine: Decoding PEM data...
	I0415 11:47:54.107064   17337 main.go:141] libmachine: Parsing certificate...
	I0415 11:47:54.107157   17337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18634-8183/.minikube/certs/cert.pem
	I0415 11:47:54.107231   17337 main.go:141] libmachine: Decoding PEM data...
	I0415 11:47:54.107245   17337 main.go:141] libmachine: Parsing certificate...
	I0415 11:47:54.107941   17337 cli_runner.go:164] Run: docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 11:47:54.158635   17337 cli_runner.go:211] docker network inspect multinode-070000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 11:47:54.158726   17337 network_create.go:281] running [docker network inspect multinode-070000] to gather additional debugging logs...
	I0415 11:47:54.158743   17337 cli_runner.go:164] Run: docker network inspect multinode-070000
	W0415 11:47:54.208491   17337 cli_runner.go:211] docker network inspect multinode-070000 returned with exit code 1
	I0415 11:47:54.208515   17337 network_create.go:284] error running [docker network inspect multinode-070000]: docker network inspect multinode-070000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-070000 not found
	I0415 11:47:54.208532   17337 network_create.go:286] output of [docker network inspect multinode-070000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-070000 not found
	
	** /stderr **
	I0415 11:47:54.208643   17337 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 11:47:54.259739   17337 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:47:54.261352   17337 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 11:47:54.261728   17337 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00251a3e0}
	I0415 11:47:54.261752   17337 network_create.go:124] attempt to create docker network multinode-070000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 11:47:54.261819   17337 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-070000 multinode-070000
	I0415 11:47:54.348037   17337 network_create.go:108] docker network multinode-070000 192.168.67.0/24 created
	I0415 11:47:54.348075   17337 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-070000" container
	I0415 11:47:54.348178   17337 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 11:47:54.398115   17337 cli_runner.go:164] Run: docker volume create multinode-070000 --label name.minikube.sigs.k8s.io=multinode-070000 --label created_by.minikube.sigs.k8s.io=true
	I0415 11:47:54.447113   17337 oci.go:103] Successfully created a docker volume multinode-070000
	I0415 11:47:54.447222   17337 cli_runner.go:164] Run: docker run --rm --name multinode-070000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-070000 --entrypoint /usr/bin/test -v multinode-070000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -d /var/lib
	I0415 11:47:54.753997   17337 oci.go:107] Successfully prepared a docker volume multinode-070000
	I0415 11:47:54.754055   17337 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 11:47:54.754068   17337 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 11:47:54.754165   17337 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-070000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-070000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
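
Note: the oci.go and retry.go entries in the stderr above show the underlying pattern: minikube repeatedly runs `docker container inspect --format={{.State.Status}}` and sleeps for a growing, jittered interval until the container verifies as "exited" (or, as here, exhausts its retries against a container that no longer exists). A minimal Go sketch of that poll-with-backoff loop, using plain doubling rather than minikube's jittered delays, written for illustration only and not taken from minikube's source:

	// waitForExited polls `docker container inspect` until the container
	// reports the "exited" state, doubling the delay between attempts.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func waitForExited(name string, maxAttempts int) error {
		delay := 500 * time.Millisecond
		for i := 0; i < maxAttempts; i++ {
			out, err := exec.Command("docker", "container", "inspect",
				name, "--format", "{{.State.Status}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "exited" {
				return nil
			}
			// A missing container makes inspect exit non-zero ("No such
			// container"), so we fall through and retry, mirroring the
			// "temporary error verifying shutdown" lines in the log.
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("container %q never reached state \"exited\"", name)
	}

	func main() {
		if err := waitForExited("multinode-070000", 8); err != nil {
			fmt.Println(err)
		}
	}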
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-070000
helpers_test.go:235: (dbg) docker inspect multinode-070000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-070000",
	        "Id": "a68d5f9dace79ca60fa503c6514b86a43cb05d4fc780fc41bbacf2e91c5e26b9",
	        "Created": "2024-04-15T18:47:54.308960041Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-070000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
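
The inspect output above is the post-mortem's key evidence: the recreated `multinode-070000` bridge network exists with subnet 192.168.67.0/24, but its Containers map is empty, so no node ever attached before the run was killed. As an illustration only (these are not minikube's own types), the fields that matter here can be decoded in Go like this:

	// Decode the docker network inspect JSON shown above; the struct
	// covers only the fields visible in this report.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type network struct {
		Name string
		IPAM struct {
			Config []struct {
				Subnet  string
				Gateway string
			}
		}
		Containers map[string]any
	}

	func main() {
		raw := []byte(`[{"Name":"multinode-070000",
			"IPAM":{"Config":[{"Subnet":"192.168.67.0/24","Gateway":"192.168.67.1"}]},
			"Containers":{}}]`)

		var nets []network
		if err := json.Unmarshal(raw, &nets); err != nil {
			panic(err)
		}
		for _, n := range nets {
			// An empty Containers map is the tell: the network outlived
			// the container that was supposed to join it.
			fmt.Printf("%s subnet=%s attached=%d\n",
				n.Name, n.IPAM.Config[0].Subnet, len(n.Containers))
		}
	}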
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-070000 -n multinode-070000: exit status 7 (114.856989ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:49:12.354784   17460 status.go:249] status error: host: state: unknown state "multinode-070000": docker container inspect multinode-070000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-070000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-070000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (98.75s)

                                                
                                    
TestScheduledStopUnix (300.9s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-647000 --memory=2048 --driver=docker 
E0415 11:51:06.008283    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 11:54:42.295311    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:54:42.852712    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-647000 --memory=2048 --driver=docker : signal: killed (5m0.004677116s)

                                                
                                                
-- stdout --
	* [scheduled-stop-647000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-647000" primary control-plane node in "scheduled-stop-647000" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-647000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-647000" primary control-plane node in "scheduled-stop-647000" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-15 11:56:04.640566 -0700 PDT m=+4770.269835969
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-647000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-647000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-647000",
	        "Id": "29030b29266d0e344f69747c806d2525046c6b35dfb662d826a0ff2fa1e1d8e0",
	        "Created": "2024-04-15T18:51:05.651490706Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-647000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-647000 -n scheduled-stop-647000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-647000 -n scheduled-stop-647000: exit status 7 (114.350119ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 11:56:04.806475   17974 status.go:249] status error: host: state: unknown state "scheduled-stop-647000": docker container inspect scheduled-stop-647000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-647000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-647000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-647000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-647000
E0415 11:56:05.351345    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
--- FAIL: TestScheduledStopUnix (300.90s)
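
As with the other five-minute timeouts in this report, `signal: killed` above means the test harness's deadline expired and the child `minikube start` process was killed; minikube itself did not crash. A minimal sketch of the mechanism, assuming a context deadline as os/exec implements it (the `sleep 300` child is a hypothetical stand-in for the hung start):

	// When the context deadline passes, exec.CommandContext sends SIGKILL
	// to the child, and Run reports the death as "signal: killed".
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()

		cmd := exec.CommandContext(ctx, "sleep", "300") // stand-in for a hung start
		err := cmd.Run()
		fmt.Println(err) // prints "signal: killed" once the deadline fires
	}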

                                                
                                    
TestSkaffold (300.99s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3161203551 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3161203551 version: (1.445900661s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-422000 --memory=2600 --driver=docker 
E0415 11:59:42.297809    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:59:42.855157    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-422000 --memory=2600 --driver=docker : signal: killed (4m57.00429558s)

                                                
                                                
-- stdout --
	* [skaffold-422000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-422000" primary control-plane node in "skaffold-422000" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-422000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-422000" primary control-plane node in "skaffold-422000" cluster
	* Pulling base image v0.0.43-1713176859-18634 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-04-15 12:01:05.540843 -0700 PDT m=+5071.170467251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-422000
helpers_test.go:235: (dbg) docker inspect skaffold-422000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-422000",
	        "Id": "11ff591ef695c268a8707c48b920a16c0411de2f44c9fd29afa17e311a3b3bc8",
	        "Created": "2024-04-15T18:56:09.571828571Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-422000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-422000 -n skaffold-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-422000 -n skaffold-422000: exit status 7 (113.067483ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 12:01:05.705178   18111 status.go:249] status error: host: state: unknown state "skaffold-422000": docker container inspect skaffold-422000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-422000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-422000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-422000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-422000
--- FAIL: TestSkaffold (300.99s)

                                                
                                    
TestInsufficientStorage (300.73s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-456000 --memory=2048 --output=json --wait=true --driver=docker 
E0415 12:04:42.297721    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 12:04:42.853959    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-456000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.004176756s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0d8e4110-10ca-4374-ae05-f14cdbd77283","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-456000] minikube v1.33.0-beta.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a539c07b-165f-4fdc-90b2-49de2ead3993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18634"}}
	{"specversion":"1.0","id":"30950c7a-1e14-44e3-b271-32b9f3076b15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig"}}
	{"specversion":"1.0","id":"c43ab95e-9ca1-4c14-8db6-8fb8fbd6bc97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"472ba61b-2b65-4c56-a754-bbeb6c9b41c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dc54f4ed-b45e-4333-ac8e-245cd7362aad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube"}}
	{"specversion":"1.0","id":"636bc1dc-30ea-4734-ac21-6f586929177e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ffd3c975-cb32-458b-b804-88290483910f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b9a517bd-5a17-422a-a830-7f1f09e1788a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"da9388ea-c8c1-493f-b93f-5a9e9d320f95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4b8ebe0-7b15-4ba3-823c-e67d27f3477b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"cf93d9d6-cc33-4348-8e61-c5642823fd14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-456000\" primary control-plane node in \"insufficient-storage-456000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec59c59e-2bab-40d3-af6b-331d95f22849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1713176859-18634 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e70a1ac6-c7c6-41ce-9c8b-099d977fa2ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
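
With `--output=json`, minikube prints one CloudEvents-style JSON object per line, as shown above; this run was killed while still on step 8 of 19 ("Creating Container"). A minimal sketch of consuming such a stream, using only field names that appear in the output above:

	// Read minikube's --output=json event stream line by line, e.g. by
	// piping `minikube start --output=json` into this program.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip non-JSON noise
			}
			// "io.k8s.sigs.minikube.step" events carry currentstep and
			// totalsteps, as in the TestInsufficientStorage output above.
			if ev.Type == "io.k8s.sigs.minikube.step" {
				fmt.Printf("[%s/%s] %s\n",
					ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			}
		}
	}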
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-456000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-456000 --output=json --layout=cluster: context deadline exceeded (882ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-456000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-456000
--- FAIL: TestInsufficientStorage (300.73s)
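
The failure chain here is mechanical: the status command was run under a context whose 882ns deadline had already expired, so it produced no stdout, and unmarshalling the empty output at status_test.go:87 yields Go's standard "unexpected end of JSON input". A minimal sketch reproducing both messages (an assumption, suggested by the "context deadline exceeded" wording, that the test drives the binary via os/exec with a context):

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The 1ns deadline has passed before the command can run.
	ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
	defer cancel()

	out, err := exec.CommandContext(ctx, "minikube", "status", "--output=json").Output()
	fmt.Println(err) // non-nil (context deadline exceeded); out stays empty

	var status map[string]any
	fmt.Println(json.Unmarshal(out, &status)) // unexpected end of JSON input
}
```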

                                                
                                    

Test pass (170/211)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.87
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.63
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.29.3/json-events 7.1
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.3
18 TestDownloadOnly/v1.29.3/DeleteAll 0.63
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.38
21 TestDownloadOnly/v1.30.0-rc.2/json-events 14.33
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.3
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.63
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.37
29 TestDownloadOnlyKic 1.84
30 TestBinaryMirror 1.61
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
36 TestAddons/Setup 141.45
40 TestAddons/parallel/InspektorGadget 10.88
41 TestAddons/parallel/MetricsServer 5.96
42 TestAddons/parallel/HelmTiller 10.81
44 TestAddons/parallel/CSI 76.65
45 TestAddons/parallel/Headlamp 13.31
46 TestAddons/parallel/CloudSpanner 5.66
47 TestAddons/parallel/LocalPath 53.93
48 TestAddons/parallel/NvidiaDevicePlugin 5.66
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.1
53 TestAddons/StoppedEnableDisable 11.71
61 TestHyperKitDriverInstallOrUpdate 8.05
64 TestErrorSpam/setup 20.79
65 TestErrorSpam/start 2.14
66 TestErrorSpam/status 1.24
67 TestErrorSpam/pause 1.73
68 TestErrorSpam/unpause 1.83
69 TestErrorSpam/stop 2.22
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 37.98
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 27.61
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.43
81 TestFunctional/serial/CacheCmd/cache/add_local 1.61
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
83 TestFunctional/serial/CacheCmd/cache/list 0.09
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.41
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
86 TestFunctional/serial/CacheCmd/cache/delete 0.18
87 TestFunctional/serial/MinikubeKubectlCmd 0.96
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.3
89 TestFunctional/serial/ExtraConfig 41.94
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 3.23
92 TestFunctional/serial/LogsFileCmd 3.12
93 TestFunctional/serial/InvalidService 4.28
95 TestFunctional/parallel/ConfigCmd 0.58
96 TestFunctional/parallel/DashboardCmd 11.66
97 TestFunctional/parallel/DryRun 1.35
98 TestFunctional/parallel/InternationalLanguage 0.7
99 TestFunctional/parallel/StatusCmd 1.25
104 TestFunctional/parallel/AddonsCmd 0.27
105 TestFunctional/parallel/PersistentVolumeClaim 27.61
107 TestFunctional/parallel/SSHCmd 0.79
108 TestFunctional/parallel/CpCmd 2.66
109 TestFunctional/parallel/MySQL 27.21
110 TestFunctional/parallel/FileSync 0.39
111 TestFunctional/parallel/CertSync 2.38
115 TestFunctional/parallel/NodeLabels 0.05
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
119 TestFunctional/parallel/License 0.38
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
131 TestFunctional/parallel/ServiceCmd/DeployApp 8.12
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
133 TestFunctional/parallel/ProfileCmd/profile_list 0.55
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
135 TestFunctional/parallel/MountCmd/any-port 7.56
136 TestFunctional/parallel/ServiceCmd/List 0.63
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
138 TestFunctional/parallel/ServiceCmd/HTTPS 15
139 TestFunctional/parallel/MountCmd/specific-port 2.14
140 TestFunctional/parallel/MountCmd/VerifyCleanup 2.7
141 TestFunctional/parallel/ServiceCmd/Format 15
142 TestFunctional/parallel/Version/short 0.11
143 TestFunctional/parallel/Version/components 0.63
144 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
145 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
146 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
147 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
148 TestFunctional/parallel/ImageCommands/ImageBuild 3.54
149 TestFunctional/parallel/ImageCommands/Setup 2.13
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.8
151 TestFunctional/parallel/ServiceCmd/URL 15
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.33
153 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.92
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.09
155 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
156 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.98
157 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.22
158 TestFunctional/parallel/DockerEnv/bash 1.49
159 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
160 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
161 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
162 TestFunctional/delete_addon-resizer_images 0.13
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.05
168 TestMultiControlPlane/serial/StartCluster 106.58
169 TestMultiControlPlane/serial/DeployApp 5.08
170 TestMultiControlPlane/serial/PingHostFromPods 1.35
171 TestMultiControlPlane/serial/AddWorkerNode 20.03
172 TestMultiControlPlane/serial/NodeLabels 0.05
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.12
174 TestMultiControlPlane/serial/CopyFile 24.62
175 TestMultiControlPlane/serial/StopSecondaryNode 11.94
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
177 TestMultiControlPlane/serial/RestartSecondaryNode 130.92
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.13
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 188.67
180 TestMultiControlPlane/serial/DeleteSecondaryNode 11.69
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
182 TestMultiControlPlane/serial/StopCluster 32.81
183 TestMultiControlPlane/serial/RestartCluster 117.3
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
185 TestMultiControlPlane/serial/AddSecondaryNode 37.99
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.19
189 TestImageBuild/serial/Setup 21.32
190 TestImageBuild/serial/NormalBuild 1.73
191 TestImageBuild/serial/BuildWithBuildArg 1.03
192 TestImageBuild/serial/BuildWithDockerIgnore 0.86
193 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.86
197 TestJSONOutput/start/Command 75.49
198 TestJSONOutput/start/Audit 0
200 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/pause/Command 0.57
204 TestJSONOutput/pause/Audit 0
206 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/unpause/Command 0.6
210 TestJSONOutput/unpause/Audit 0
212 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
215 TestJSONOutput/stop/Command 10.71
216 TestJSONOutput/stop/Audit 0
218 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
219 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
220 TestErrorJSONOutput 0.78
222 TestKicCustomNetwork/create_custom_network 23.68
223 TestKicCustomNetwork/use_default_bridge_network 24.33
224 TestKicExistingNetwork 22.95
225 TestKicCustomSubnet 22.68
226 TestKicStaticIP 24.81
227 TestMainNoArgs 0.09
228 TestMinikubeProfile 48.56
231 TestMountStart/serial/StartWithMountFirst 7.84
232 TestMountStart/serial/VerifyMountFirst 0.39
233 TestMountStart/serial/StartWithMountSecond 7.66
253 TestPreload 111.33
274 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 11.76
275 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.46
TestDownloadOnly/v1.20.0/json-events (16.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-841000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-841000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (16.865816271s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.87s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-841000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-841000: exit status 85 (295.375758ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-841000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:36 PDT |          |
	|         | -p download-only-841000        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 10:36:34
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 10:36:34.343656    8642 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:36:34.343839    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:36:34.343845    8642 out.go:304] Setting ErrFile to fd 2...
	I0415 10:36:34.343848    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:36:34.344040    8642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	W0415 10:36:34.344144    8642 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18634-8183/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18634-8183/.minikube/config/config.json: no such file or directory
	I0415 10:36:34.345912    8642 out.go:298] Setting JSON to true
	I0415 10:36:34.368014    8642 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2165,"bootTime":1713200429,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 10:36:34.368109    8642 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 10:36:34.390285    8642 out.go:97] [download-only-841000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 10:36:34.412093    8642 out.go:169] MINIKUBE_LOCATION=18634
	I0415 10:36:34.390488    8642 notify.go:220] Checking for updates...
	W0415 10:36:34.390523    8642 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball: no such file or directory
	I0415 10:36:34.454810    8642 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 10:36:34.475833    8642 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 10:36:34.496816    8642 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:36:34.518078    8642 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	W0415 10:36:34.559813    8642 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 10:36:34.560281    8642 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:36:34.615966    8642 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 10:36:34.616113    8642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:36:34.716779    8642 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-04-15 17:36:34.706826587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 10:36:34.737759    8642 out.go:97] Using the docker driver based on user configuration
	I0415 10:36:34.737787    8642 start.go:297] selected driver: docker
	I0415 10:36:34.737798    8642 start.go:901] validating driver "docker" against <nil>
	I0415 10:36:34.737947    8642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:36:34.837564    8642 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-04-15 17:36:34.827486133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 10:36:34.837746    8642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 10:36:34.840580    8642 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0415 10:36:34.840724    8642 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 10:36:34.862245    8642 out.go:169] Using Docker Desktop driver with root privileges
	I0415 10:36:34.883067    8642 cni.go:84] Creating CNI manager for ""
	I0415 10:36:34.883111    8642 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 10:36:34.883312    8642 start.go:340] cluster config:
	{Name:download-only-841000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 10:36:34.905095    8642 out.go:97] Starting "download-only-841000" primary control-plane node in "download-only-841000" cluster
	I0415 10:36:34.905160    8642 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 10:36:34.926200    8642 out.go:97] Pulling base image v0.0.43-1713176859-18634 ...
	I0415 10:36:34.926283    8642 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 10:36:34.926363    8642 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 10:36:34.976587    8642 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b to local cache
	I0415 10:36:34.976845    8642 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local cache directory
	I0415 10:36:34.976979    8642 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b to local cache
	I0415 10:36:34.980322    8642 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0415 10:36:34.980336    8642 cache.go:56] Caching tarball of preloaded images
	I0415 10:36:34.980488    8642 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 10:36:35.001870    8642 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0415 10:36:35.001882    8642 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 10:36:35.078976    8642 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0415 10:36:41.302655    8642 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 10:36:41.302855    8642 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 10:36:41.855189    8642 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0415 10:36:41.855440    8642 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/download-only-841000/config.json ...
	I0415 10:36:41.855464    8642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/download-only-841000/config.json: {Name:mk1ddac539e856262d79d5e7e31d72459f8a02ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:36:41.855746    8642 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 10:36:41.856044    8642 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-841000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-841000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
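
Note that this subtest passes even though `minikube logs` exits 85: on a `--download-only` profile the host was never created, so a non-zero exit is expected and the subtest only measures duration. A hedged sketch of recovering that exit code from Go (a hypothetical wrapper; the real assertion lives in aaa_download_only_test.go):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-841000")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// For the run above this prints: exit code 85
		fmt.Printf("exit code %d, %d bytes of output\n", ee.ExitCode(), len(out))
	}
}
```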

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.63s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-841000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (7.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-422000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-422000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker : (7.097115056s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (7.10s)

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-422000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-422000: exit status 85 (298.759785ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-841000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:36 PDT |                     |
	|         | -p download-only-841000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:36 PDT | 15 Apr 24 10:36 PDT |
	| delete  | -p download-only-841000        | download-only-841000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:36 PDT | 15 Apr 24 10:36 PDT |
	| start   | -o=json --download-only        | download-only-422000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:36 PDT |                     |
	|         | -p download-only-422000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 10:36:52
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 10:36:52.516055    8710 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:36:52.516228    8710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:36:52.516234    8710 out.go:304] Setting ErrFile to fd 2...
	I0415 10:36:52.516238    8710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:36:52.516423    8710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 10:36:52.517863    8710 out.go:298] Setting JSON to true
	I0415 10:36:52.539851    8710 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2183,"bootTime":1713200429,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 10:36:52.539985    8710 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 10:36:52.561235    8710 out.go:97] [download-only-422000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 10:36:52.581831    8710 out.go:169] MINIKUBE_LOCATION=18634
	I0415 10:36:52.561339    8710 notify.go:220] Checking for updates...
	I0415 10:36:52.624811    8710 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 10:36:52.645711    8710 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 10:36:52.666779    8710 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:36:52.688053    8710 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	W0415 10:36:52.729937    8710 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 10:36:52.730298    8710 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:36:52.784737    8710 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 10:36:52.784875    8710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:36:52.887083    8710 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-04-15 17:36:52.877063242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 10:36:52.908746    8710 out.go:97] Using the docker driver based on user configuration
	I0415 10:36:52.908794    8710 start.go:297] selected driver: docker
	I0415 10:36:52.908810    8710 start.go:901] validating driver "docker" against <nil>
	I0415 10:36:52.909027    8710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:36:53.009896    8710 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-04-15 17:36:53.00062582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 10:36:53.010065    8710 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 10:36:53.012935    8710 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0415 10:36:53.013075    8710 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 10:36:53.034534    8710 out.go:169] Using Docker Desktop driver with root privileges
	I0415 10:36:53.055637    8710 cni.go:84] Creating CNI manager for ""
	I0415 10:36:53.055670    8710 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 10:36:53.055696    8710 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 10:36:53.055795    8710 start.go:340] cluster config:
	{Name:download-only-422000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-422000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 10:36:53.076530    8710 out.go:97] Starting "download-only-422000" primary control-plane node in "download-only-422000" cluster
	I0415 10:36:53.076569    8710 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 10:36:53.097351    8710 out.go:97] Pulling base image v0.0.43-1713176859-18634 ...
	I0415 10:36:53.097395    8710 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 10:36:53.097466    8710 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 10:36:53.147593    8710 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b to local cache
	I0415 10:36:53.147768    8710 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local cache directory
	I0415 10:36:53.147785    8710 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local cache directory, skipping pull
	I0415 10:36:53.147791    8710 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in cache, skipping pull
	I0415 10:36:53.147798    8710 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b as a tarball
	I0415 10:36:53.154159    8710 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 10:36:53.154180    8710 cache.go:56] Caching tarball of preloaded images
	I0415 10:36:53.154344    8710 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 10:36:53.176066    8710 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0415 10:36:53.176089    8710 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0415 10:36:53.254847    8710 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4?checksum=md5:2fedab548578a1509c0f422889c3109c -> /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-422000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-422000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.30s)
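
The preload fetch above is checksum-gated: the download URL carries `?checksum=md5:2fedab548578a1509c0f422889c3109c`, and the v1.20.0 run earlier in this report shows preload.go saving and then verifying that sum against the tarball on disk. A small illustrative Go helper for the verification step (the filename and digest are copied from this log; the function itself is a sketch, not minikube's implementation):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file through an MD5 hash and compares the hex digest.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	err := verifyMD5("preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4",
		"2fedab548578a1509c0f422889c3109c")
	fmt.Println(err) // nil when the tarball matches the advertised digest
}
```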

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (0.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.63s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-422000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/json-events (14.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-667000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-667000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=docker : (14.32576486s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (14.33s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-667000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-667000: exit status 85 (295.844105ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-841000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:36 PDT |                     |
	|         | -p download-only-841000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:36 PDT | 15 Apr 24 10:36 PDT |
	| delete  | -p download-only-841000           | download-only-841000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:36 PDT | 15 Apr 24 10:36 PDT |
	| start   | -o=json --download-only           | download-only-422000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:36 PDT |                     |
	|         | -p download-only-422000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:36 PDT | 15 Apr 24 10:37 PDT |
	| delete  | -p download-only-422000           | download-only-422000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:37 PDT | 15 Apr 24 10:37 PDT |
	| start   | -o=json --download-only           | download-only-667000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 10:37 PDT |                     |
	|         | -p download-only-667000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 10:37:00
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 10:37:00.923845    8781 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:37:00.924516    8781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:37:00.924523    8781 out.go:304] Setting ErrFile to fd 2...
	I0415 10:37:00.924528    8781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:37:00.924971    8781 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 10:37:00.926668    8781 out.go:298] Setting JSON to true
	I0415 10:37:00.949010    8781 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2191,"bootTime":1713200429,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 10:37:00.949124    8781 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 10:37:00.971173    8781 out.go:97] [download-only-667000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 10:37:00.992600    8781 out.go:169] MINIKUBE_LOCATION=18634
	I0415 10:37:00.971349    8781 notify.go:220] Checking for updates...
	I0415 10:37:01.036058    8781 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 10:37:01.056634    8781 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 10:37:01.077717    8781 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:37:01.098774    8781 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	W0415 10:37:01.140937    8781 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 10:37:01.141390    8781 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:37:01.202542    8781 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 10:37:01.202698    8781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:37:01.300376    8781 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-04-15 17:37:01.291034872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 10:37:01.321747    8781 out.go:97] Using the docker driver based on user configuration
	I0415 10:37:01.321800    8781 start.go:297] selected driver: docker
	I0415 10:37:01.321817    8781 start.go:901] validating driver "docker" against <nil>
	I0415 10:37:01.322031    8781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:37:01.420305    8781 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:109 SystemTime:2024-04-15 17:37:01.410557186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 10:37:01.420530    8781 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 10:37:01.423393    8781 start_flags.go:393] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0415 10:37:01.423541    8781 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 10:37:01.444921    8781 out.go:169] Using Docker Desktop driver with root privileges
	I0415 10:37:01.466195    8781 cni.go:84] Creating CNI manager for ""
	I0415 10:37:01.466239    8781 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 10:37:01.466272    8781 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 10:37:01.466417    8781 start.go:340] cluster config:
	{Name:download-only-667000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:5877 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 10:37:01.488041    8781 out.go:97] Starting "download-only-667000" primary control-plane node in "download-only-667000" cluster
	I0415 10:37:01.488087    8781 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 10:37:01.509654    8781 out.go:97] Pulling base image v0.0.43-1713176859-18634 ...
	I0415 10:37:01.509706    8781 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 10:37:01.509790    8781 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local docker daemon
	I0415 10:37:01.559481    8781 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b to local cache
	I0415 10:37:01.559652    8781 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local cache directory
	I0415 10:37:01.559669    8781 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b in local cache directory, skipping pull
	I0415 10:37:01.559674    8781 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b exists in cache, skipping pull
	I0415 10:37:01.559681    8781 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b as a tarball
	I0415 10:37:01.569577    8781 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0415 10:37:01.569605    8781 cache.go:56] Caching tarball of preloaded images
	I0415 10:37:01.569831    8781 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 10:37:01.591580    8781 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0415 10:37:01.591592    8781 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 10:37:01.664580    8781 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:9834337eee074d8b5e25932a2917a549 -> /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0415 10:37:06.681899    8781 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 10:37:06.682083    8781 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 10:37:07.175112    8781 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on docker
	I0415 10:37:07.175343    8781 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/download-only-667000/config.json ...
	I0415 10:37:07.175367    8781 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/download-only-667000/config.json: {Name:mkeb03ae8600ac56fabe5bf37a47bfb8f6896204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 10:37:07.175692    8781 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 10:37:07.175890    8781 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18634-8183/.minikube/cache/darwin/amd64/v1.30.0-rc.2/kubectl
	
	
	* The control-plane node download-only-667000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-667000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.30s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.63s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.63s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-667000
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnlyKic (1.84s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-907000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-907000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-907000
--- PASS: TestDownloadOnlyKic (1.84s)

TestBinaryMirror (1.61s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-152000 --alsologtostderr --binary-mirror http://127.0.0.1:52305 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-152000 --alsologtostderr --binary-mirror http://127.0.0.1:52305 --driver=docker : (1.009912933s)
helpers_test.go:175: Cleaning up "binary-mirror-152000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-152000
--- PASS: TestBinaryMirror (1.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-893000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-893000: exit status 85 (196.437252ms)

-- stdout --
	* Profile "addons-893000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-893000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-893000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-893000: exit status 85 (217.322932ms)

-- stdout --
	* Profile "addons-893000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-893000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (141.45s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-893000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-893000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m21.448469187s)
--- PASS: TestAddons/Setup (141.45s)

TestAddons/parallel/InspektorGadget (10.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cz4g6" [184c3820-00c2-4faf-9d4f-144ee6ccd2ee] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005234507s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-893000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-893000: (5.875716764s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

TestAddons/parallel/MetricsServer (5.96s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.631027ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-j9nfx" [cf05fb94-88cc-4efc-b76b-7bcaeb86abce] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004430382s
addons_test.go:415: (dbg) Run:  kubectl --context addons-893000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-893000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.96s)

TestAddons/parallel/HelmTiller (10.81s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.366128ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-nscmh" [783868fa-34ea-4aac-8104-f7e468ab930f] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003875882s
addons_test.go:473: (dbg) Run:  kubectl --context addons-893000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-893000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.117516222s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-893000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.81s)
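
The tiller check above reduces to one ephemeral pod run against the in-cluster tiller. A hand-runnable equivalent, assuming the helm-tiller addon is enabled on the current kubectl context (the image tag is taken verbatim from the log):

	kubectl run --rm helm-test --restart=Never \
	  --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version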

TestAddons/parallel/CSI (76.65s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 15.723526ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-893000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-893000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e306b55c-caad-4e53-8172-dbe462ec166e] Pending
helpers_test.go:344: "task-pv-pod" [e306b55c-caad-4e53-8172-dbe462ec166e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e306b55c-caad-4e53-8172-dbe462ec166e] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005193573s
addons_test.go:584: (dbg) Run:  kubectl --context addons-893000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-893000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-893000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-893000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-893000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-893000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-893000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [08980eac-02f0-4781-8ddf-391ec0142103] Pending
helpers_test.go:344: "task-pv-pod-restore" [08980eac-02f0-4781-8ddf-391ec0142103] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [08980eac-02f0-4781-8ddf-391ec0142103] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.004392536s
addons_test.go:626: (dbg) Run:  kubectl --context addons-893000 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-893000 delete pod task-pv-pod-restore: (1.220402881s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-893000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-893000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-893000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-893000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.720671211s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-893000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (76.65s)
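
For reference, the snapshot/restore flow driven above can be replayed by hand. A sketch assuming a cluster with the csi-hostpath-driver and volumesnapshots addons enabled and the manifests from minikube's test/integration/testdata directory; kubectl wait stands in for the test's polling loop over kubectl get:

	kubectl create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m
	kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl wait --for=condition=Ready pod/task-pv-pod --timeout=6m
	# snapshot the volume, drop the original pod and claim, then restore
	kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl delete pod task-pv-pod
	kubectl delete pvc hpvc
	kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	kubectl wait --for=condition=Ready pod/task-pv-pod-restore --timeout=6m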

TestAddons/parallel/Headlamp (13.31s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-893000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-893000 --alsologtostderr -v=1: (1.300557994s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-zms7b" [83844087-2c43-4fa9-9ac1-07f8ca688b50] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-zms7b" [83844087-2c43-4fa9-9ac1-07f8ca688b50] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.006604887s
--- PASS: TestAddons/parallel/Headlamp (13.31s)

TestAddons/parallel/CloudSpanner (5.66s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-6v2zc" [4bc9d779-5d39-4a92-b99b-bb7cce4d5bc5] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005096992s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-893000
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

TestAddons/parallel/LocalPath (53.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-893000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-893000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-893000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7efd7e04-cfaf-4d42-8503-c3f94e20cb7a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7efd7e04-cfaf-4d42-8503-c3f94e20cb7a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7efd7e04-cfaf-4d42-8503-c3f94e20cb7a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.002781916s
addons_test.go:891: (dbg) Run:  kubectl --context addons-893000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-893000 ssh "cat /opt/local-path-provisioner/pvc-931e2b4c-8abe-4afb-8ac0-5b08c5602dd3_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-893000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-893000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-893000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-893000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.019435017s)
--- PASS: TestAddons/parallel/LocalPath (53.93s)
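
The ssh check above hard-codes the provisioned volume directory; replaying it by hand requires looking the name up, since local-path derives the on-node directory from the PV name. A sketch assuming the storage-provisioner-rancher addon and the same testdata manifests:

	kubectl apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl apply -f testdata/storage-provisioner-rancher/pod.yaml
	# the bound PV name (pvc-<uid>) is the directory prefix under /opt/local-path-provisioner
	VOL=$(kubectl get pvc test-pvc -o jsonpath='{.spec.volumeName}')
	minikube ssh "cat /opt/local-path-provisioner/${VOL}_default_test-pvc/file1"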

TestAddons/parallel/NvidiaDevicePlugin (5.66s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fpd28" [a3548bb3-5be5-464b-a274-27db6478725c] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005268897s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-893000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.66s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-rnhbj" [3531469c-7ea9-4c00-959e-f265ffe08b60] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005162152s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-893000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-893000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)
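
What this asserts: the gcp-auth addon's webhook copies its credentials secret into any namespace created after the addon is enabled. The equivalent manual check, against a context with the addon enabled:

	kubectl create ns new-namespace
	kubectl get secret gcp-auth -n new-namespace   # present without any manual copy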

TestAddons/StoppedEnableDisable (11.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-893000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-893000: (10.962242388s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-893000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-893000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-893000
--- PASS: TestAddons/StoppedEnableDisable (11.71s)

TestHyperKitDriverInstallOrUpdate (8.05s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.05s)

TestErrorSpam/setup (20.79s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-329000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-329000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 --driver=docker : (20.789350617s)
--- PASS: TestErrorSpam/setup (20.79s)

TestErrorSpam/start (2.14s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 start --dry-run
--- PASS: TestErrorSpam/start (2.14s)

TestErrorSpam/status (1.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 status
--- PASS: TestErrorSpam/status (1.24s)

TestErrorSpam/pause (1.73s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 pause
--- PASS: TestErrorSpam/pause (1.73s)

TestErrorSpam/unpause (1.83s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

TestErrorSpam/stop (2.22s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 stop: (1.575744869s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-329000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-329000 stop
--- PASS: TestErrorSpam/stop (2.22s)
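
The sequence this suite drives is easy to reproduce by hand; a minimal sketch, assuming a throwaway profile named "demo" (hypothetical) and the minikube binary on PATH. Each subcommand is run repeatedly above because the suite is checking for unexpected error/warning spam in the output:

	minikube start -p demo --memory=2250 --wait=false --driver=docker
	minikube -p demo pause
	minikube -p demo unpause
	minikube -p demo stop
	minikube delete -p demo    # clean up the profile afterwards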

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18634-8183/.minikube/files/etc/test/nested/copy/8640/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.98s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-007000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-007000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (37.98435333s)
--- PASS: TestFunctional/serial/StartWithProxy (37.98s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.61s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-007000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-007000 --alsologtostderr -v=8: (27.608512697s)
functional_test.go:659: soft start took 27.608975958s for "functional-007000" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.61s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-007000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 cache add registry.k8s.io/pause:3.1: (1.147560121s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 cache add registry.k8s.io/pause:3.3: (1.181618826s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 cache add registry.k8s.io/pause:latest: (1.100078093s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)
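
The remote-cache flow above, by hand; a minimal sketch, with <profile> standing in for the profile name (the pause tags mirror the ones the test pulls):

	# pull images onto the host and pre-load them into the cluster node
	minikube -p <profile> cache add registry.k8s.io/pause:3.1
	minikube -p <profile> cache add registry.k8s.io/pause:3.3
	minikube -p <profile> cache add registry.k8s.io/pause:latest
	minikube cache list    # lists the cached tags; the cache itself is shared across profiles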

TestFunctional/serial/CacheCmd/cache/add_local (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local3898616516/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 cache add minikube-local-cache-test:functional-007000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 cache add minikube-local-cache-test:functional-007000: (1.07520227s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 cache delete minikube-local-cache-test:functional-007000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-007000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.61s)
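
Caching a locally built image follows the same pattern; a sketch, assuming a trivial Dockerfile in the current directory and a hypothetical tag "local-test":

	docker build -t local-test .
	minikube -p <profile> cache add local-test      # export from the host daemon into the node
	minikube -p <profile> cache delete local-test   # drop it from the cache again
	docker rmi local-test                           # and from the host daemon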

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (386.032249ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)
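
The reload check removes the image inside the node, then restores it from the host-side cache; a sketch for a docker-runtime profile:

	minikube -p <profile> ssh "sudo docker rmi registry.k8s.io/pause:latest"
	minikube -p <profile> ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # fails: image gone
	minikube -p <profile> cache reload
	minikube -p <profile> ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # succeeds again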

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.96s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 kubectl -- --context functional-007000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.96s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.3s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-007000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-007000 get pods: (1.298091566s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.30s)

TestFunctional/serial/ExtraConfig (41.94s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-007000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-007000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.936465766s)
functional_test.go:757: restart took 41.936599868s for "functional-007000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.94s)
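
Component flags are passed straight through on a restart of an existing profile; a sketch (the admission-plugin value mirrors the run above; the grep is just one way to verify the flag landed, not part of the test):

	minikube start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context <profile> -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins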

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-007000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
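
The health check reduces to reading phase and readiness off the static control-plane pods; roughly, with <profile> as the kubectl context:

	kubectl --context <profile> -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'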

TestFunctional/serial/LogsCmd (3.23s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 logs: (3.231909343s)
--- PASS: TestFunctional/serial/LogsCmd (3.23s)

TestFunctional/serial/LogsFileCmd (3.12s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2268804208/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2268804208/001/logs.txt: (3.118114212s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.12s)
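
Both log paths are one-liners; a sketch:

	minikube -p <profile> logs                     # dump cluster logs to stdout
	minikube -p <profile> logs --file ./logs.txt   # write them to a file instead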

TestFunctional/serial/InvalidService (4.28s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-007000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-007000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-007000: exit status 115 (561.656617ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30403 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-007000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.28s)
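
SVC_UNREACHABLE (exit 115) is minikube declining to route to a service with no running endpoints; a sketch, with invalidsvc.yaml standing in for the testdata manifest (a Service whose selector matches no pods):

	kubectl --context <profile> apply -f invalidsvc.yaml
	minikube -p <profile> service invalid-svc   # exits 115: SVC_UNREACHABLE
	kubectl --context <profile> delete -f invalidsvc.yaml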

TestFunctional/parallel/ConfigCmd (0.58s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 config get cpus: exit status 14 (63.149923ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 config get cpus: exit status 14 (63.576483ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)
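
As the run above shows, config get exits 14 when the key is unset; the round trip by hand:

	minikube -p <profile> config unset cpus
	minikube -p <profile> config get cpus   # exit status 14: key not found
	minikube -p <profile> config set cpus 2
	minikube -p <profile> config get cpus   # prints 2, exits 0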

TestFunctional/parallel/DashboardCmd (11.66s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-007000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-007000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 10910: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.66s)

TestFunctional/parallel/DryRun (1.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-007000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-007000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (701.169381ms)

-- stdout --
	* [functional-007000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0415 10:45:26.555568   10848 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:45:26.556307   10848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:45:26.556315   10848 out.go:304] Setting ErrFile to fd 2...
	I0415 10:45:26.556322   10848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:45:26.556810   10848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 10:45:26.558478   10848 out.go:298] Setting JSON to false
	I0415 10:45:26.580527   10848 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2697,"bootTime":1713200429,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 10:45:26.580642   10848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 10:45:26.601801   10848 out.go:177] * [functional-007000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 10:45:26.643731   10848 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 10:45:26.643803   10848 notify.go:220] Checking for updates...
	I0415 10:45:26.685369   10848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 10:45:26.706627   10848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 10:45:26.748468   10848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:45:26.769587   10848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	I0415 10:45:26.843432   10848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 10:45:26.865358   10848 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 10:45:26.866136   10848 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:45:26.921967   10848 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 10:45:26.922122   10848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:45:27.021911   10848 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:116 SystemTime:2024-04-15 17:45:27.012349926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 10:45:27.064129   10848 out.go:177] * Using the docker driver based on existing profile
	I0415 10:45:27.085471   10848 start.go:297] selected driver: docker
	I0415 10:45:27.085502   10848 start.go:901] validating driver "docker" against &{Name:functional-007000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 10:45:27.085622   10848 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 10:45:27.111376   10848 out.go:177] 
	W0415 10:45:27.132198   10848 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0415 10:45:27.153605   10848 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-007000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.35s)
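
The validation runs without touching the existing cluster; a sketch of both outcomes:

	# exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY): 250MB is below the 1800MB usable minimum
	minikube start -p <profile> --dry-run --memory 250MB --driver=docker
	# with the profile's existing settings, the dry run validates and exits 0
	minikube start -p <profile> --dry-run --driver=docker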

TestFunctional/parallel/InternationalLanguage (0.7s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-007000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-007000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (696.339745ms)

-- stdout --
	* [functional-007000] minikube v1.33.0-beta.0 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=18634
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0415 10:45:25.853062   10830 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:45:25.853230   10830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:45:25.853236   10830 out.go:304] Setting ErrFile to fd 2...
	I0415 10:45:25.853239   10830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:45:25.853450   10830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 10:45:25.855095   10830 out.go:298] Setting JSON to false
	I0415 10:45:25.877789   10830 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2696,"bootTime":1713200429,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0415 10:45:25.877879   10830 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 10:45:25.899651   10830 out.go:177] * [functional-007000] minikube v1.33.0-beta.0 sur Darwin 14.4.1
	I0415 10:45:25.996280   10830 out.go:177]   - MINIKUBE_LOCATION=18634
	I0415 10:45:25.974301   10830 notify.go:220] Checking for updates...
	I0415 10:45:26.038194   10830 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
	I0415 10:45:26.059166   10830 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 10:45:26.080051   10830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 10:45:26.101094   10830 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube
	I0415 10:45:26.122253   10830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 10:45:26.143894   10830 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 10:45:26.144641   10830 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 10:45:26.219625   10830 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0415 10:45:26.219769   10830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 10:45:26.321717   10830 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:116 SystemTime:2024-04-15 17:45:26.311388348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0415 10:45:26.364055   10830 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0415 10:45:26.385021   10830 start.go:297] selected driver: docker
	I0415 10:45:26.385051   10830 start.go:901] validating driver "docker" against &{Name:functional-007000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713176859-18634@sha256:aa626f490dfc5e9a013f239555a8c57845d8eb915cd55dbd63f6a05070c2709b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 10:45:26.385189   10830 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 10:45:26.410911   10830 out.go:177] 
	W0415 10:45:26.431902   10830 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0415 10:45:26.452773   10830 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.70s)

TestFunctional/parallel/StatusCmd (1.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)
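
status takes a Go template via -f, as the second run above does (its "kublet" label is a typo in the test's template text, not in the .Kubelet field name); a sketch:

	minikube -p <profile> status
	minikube -p <profile> status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
	minikube -p <profile> status -o json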

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (27.61s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
E0415 10:44:42.880370    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
helpers_test.go:344: "storage-provisioner" [f6796fbd-99a4-4233-b7ef-2c1385ae6b99] Running
E0415 10:44:42.890495    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 10:44:42.910566    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 10:44:42.950697    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 10:44:43.030890    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 10:44:43.193127    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 10:44:43.513269    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 10:44:44.153781    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 10:44:45.434139    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00612251s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-007000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-007000 apply -f testdata/storage-provisioner/pvc.yaml
E0415 10:44:47.994695    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-007000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-007000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [56145714-44c6-43d1-a715-0970533d4c79] Pending
helpers_test.go:344: "sp-pod" [56145714-44c6-43d1-a715-0970533d4c79] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0415 10:44:53.114796    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [56145714-44c6-43d1-a715-0970533d4c79] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005183619s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-007000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-007000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-007000 delete -f testdata/storage-provisioner/pod.yaml: (1.001596386s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-007000 apply -f testdata/storage-provisioner/pod.yaml
E0415 10:45:03.356375    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b0baf6e8-84f2-4bfa-bfa9-7c8a17c82953] Pending
helpers_test.go:344: "sp-pod" [b0baf6e8-84f2-4bfa-bfa9-7c8a17c82953] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b0baf6e8-84f2-4bfa-bfa9-7c8a17c82953] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004172467s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-007000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.61s)
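
What is being checked is the claim's durability: data written through the mount survives pod recreation. A compressed sketch, with pvc.yaml and pod.yaml standing in for the testdata manifests (the pod mounts the claim at /tmp/mount):

	kubectl --context <profile> apply -f pvc.yaml   # bound by the storage-provisioner addon
	kubectl --context <profile> apply -f pod.yaml
	kubectl --context <profile> exec sp-pod -- touch /tmp/mount/foo
	kubectl --context <profile> delete -f pod.yaml && kubectl --context <profile> apply -f pod.yaml
	kubectl --context <profile> exec sp-pod -- ls /tmp/mount   # foo is still there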

TestFunctional/parallel/SSHCmd (0.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

TestFunctional/parallel/CpCmd (2.66s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh -n functional-007000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 cp functional-007000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd1880751199/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh -n functional-007000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh -n functional-007000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.66s)
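
cp copies in both directions and, per the /tmp/does/not/exist target above, creates missing directories on the node side; a sketch:

	minikube -p <profile> cp ./cp-test.txt /home/docker/cp-test.txt            # host -> node
	minikube -p <profile> cp <profile>:/home/docker/cp-test.txt ./cp-test.txt  # node -> host
	minikube -p <profile> ssh "sudo cat /home/docker/cp-test.txt"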

TestFunctional/parallel/MySQL (27.21s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-007000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-s5rnw" [bf71e07c-60cd-441c-947f-5499cc662494] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-s5rnw" [bf71e07c-60cd-441c-947f-5499cc662494] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.003437537s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-007000 exec mysql-859648c796-s5rnw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-007000 exec mysql-859648c796-s5rnw -- mysql -ppassword -e "show databases;": exit status 1 (122.163899ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-007000 exec mysql-859648c796-s5rnw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-007000 exec mysql-859648c796-s5rnw -- mysql -ppassword -e "show databases;": exit status 1 (129.953799ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-007000 exec mysql-859648c796-s5rnw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-007000 exec mysql-859648c796-s5rnw -- mysql -ppassword -e "show databases;": exit status 1 (111.223713ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-007000 exec mysql-859648c796-s5rnw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.21s)
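
The failures above (ERROR 1045, then 2002) are apparently transient while mysqld finishes initialising inside the pod; the test simply retries the same query until it succeeds. By hand, assuming the pod name is looked up first:

	kubectl --context <profile> get pods -l app=mysql
	kubectl --context <profile> exec <mysql-pod> -- mysql -ppassword -e "show databases;"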

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/8640/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "sudo cat /etc/test/nested/copy/8640/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)
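
Note: minikube syncs anything placed under $MINIKUBE_HOME/files into the node's filesystem at start, which is what this test verifies. A minimal sketch of reproducing the check by hand (paths mirror this run):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/8640
    echo "Test file for checking file sync process" \
      > ~/.minikube/files/etc/test/nested/copy/8640/hosts
    minikube start -p functional-007000
    minikube -p functional-007000 ssh "sudo cat /etc/test/nested/copy/8640/hosts"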

                                                
                                    
TestFunctional/parallel/CertSync (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/8640.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "sudo cat /etc/ssl/certs/8640.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/8640.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "sudo cat /usr/share/ca-certificates/8640.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/86402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "sudo cat /etc/ssl/certs/86402.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/86402.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "sudo cat /usr/share/ca-certificates/86402.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.38s)
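
Note: the hash-named files checked above are OpenSSL subject-hash links for the synced certs (51391683.0 pairing with 8640.pem and 3ec20f2e.0 with 86402.pem, per the grouping of the checks). A minimal sketch of deriving such a name on the host (cert path is illustrative):

    # Print the subject hash OpenSSL uses for /etc/ssl/certs/<hash>.0 names.
    openssl x509 -hash -noout -in 8640.pem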

                                                
                                    
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-007000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
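
Note: the quoted go-template iterates the first node's label map. The same check done interactively:

    kubectl --context functional-007000 get nodes --show-labels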

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 ssh "sudo systemctl is-active crio": exit status 1 (408.332194ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
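
Note: exit status 3 is how "systemctl is-active" reports an inactive unit, so the "inactive" stdout plus non-zero exit is the expected result on a Docker-runtime cluster. Reproduced by hand:

    minikube -p functional-007000 ssh "sudo systemctl is-active crio"
    echo $?   # remote command exits 3; minikube surfaces it as exit status 1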

                                                
                                    
TestFunctional/parallel/License (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.38s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-007000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-007000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-007000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 10369: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-007000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-007000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-007000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5d6a02a1-78f7-484c-a9cd-d981fd7bd59d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0415 10:44:42.719808    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
helpers_test.go:344: "nginx-svc" [5d6a02a1-78f7-484c-a9cd-d981fd7bd59d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003242249s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)
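
Note: the cert_rotation.go "key failed" lines here and later in the run appear to come from client-go's certificate watcher still referencing the deleted addons-893000 profile; they are harness noise unrelated to the test at hand.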

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-007000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-007000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 10420: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
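
Note: the serial TunnelCmd group above starts a tunnel, resolves the LoadBalancer ingress IP through it, and tears it down; the "unable to kill pid" messages are benign since the daemon had already exited. A minimal sketch of the same lifecycle (service name from this run):

    minikube -p functional-007000 tunnel &   # needs an open terminal on Docker/darwin
    TUNNEL_PID=$!
    kubectl --context functional-007000 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # 127.0.0.1 via the tunnel
    curl -s http://127.0.0.1/ >/dev/null && echo "tunnel ok"
    kill $TUNNEL_PID 2>/dev/null || true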

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-007000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-007000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-8fpx2" [fbaf9c47-6af6-4cc1-bdac-8f0fac104f83] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-8fpx2" [fbaf9c47-6af6-4cc1-bdac-8f0fac104f83] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004210423s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)
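
Note: "profile lis" above is a deliberate misspelling; the subtest asserts that an invalid subcommand does not implicitly create a profile, then confirms via "profile list --output json".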

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "459.392295ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "85.6223ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "458.258133ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "86.749807ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)
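
Note: the JSON form is the scripting-friendly variant being timed above. A minimal sketch of consuming it (jq is an assumption, not used by the suite):

    minikube profile list -o json | jq -r '.valid[].Name'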

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1769547017/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713203112157323000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1769547017/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713203112157323000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1769547017/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713203112157323000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1769547017/001/test-1713203112157323000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (370.319767ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 15 17:45 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 15 17:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 15 17:45 test-1713203112157323000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh cat /mount-9p/test-1713203112157323000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-007000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [002ad067-36c4-49e2-9a3b-79d0e16708cd] Pending
helpers_test.go:344: "busybox-mount" [002ad067-36c4-49e2-9a3b-79d0e16708cd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [002ad067-36c4-49e2-9a3b-79d0e16708cd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [002ad067-36c4-49e2-9a3b-79d0e16708cd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004881089s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-007000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port1769547017/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.56s)
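
Note: the first findmnt probe fails because the 9p server takes a moment to come up; the test retries, then exercises the mount from both the host and a busybox pod. A minimal sketch of the same flow by hand (host directory is illustrative):

    minikube mount -p functional-007000 /tmp/hostdir:/mount-9p &   # keep running
    minikube -p functional-007000 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-007000 ssh "ls -la /mount-9p"
    minikube -p functional-007000 ssh "sudo umount -f /mount-9p"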

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 service list -o json
functional_test.go:1490: Took "610.577668ms" to run "out/minikube-darwin-amd64 -p functional-007000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 service --namespace=default --https --url hello-node: signal: killed (15.004578105s)

                                                
                                                
-- stdout --
	https://127.0.0.1:53292

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:53292
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
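
Note: the "signal: killed" exits in this and the later Format/URL subtests are expected. On the Docker driver on darwin, "minikube service --url" keeps a tunnel open in the foreground (hence the "terminal needs to be open" warning), so the test reads the printed URL and kills the process when its 15s window ends.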

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1934322339/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (368.987763ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1934322339/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 ssh "sudo umount -f /mount-9p": exit status 1 (359.657597ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-007000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1934322339/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.14s)
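
Note: the exit-32 "not mounted" umount above is cleanup noise; the mount daemon had already been stopped, so there was nothing left to unmount and the test tolerates the failure.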

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4153485804/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4153485804/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4153485804/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T" /mount1: exit status 1 (452.087878ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T" /mount1: (1.074421325s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T" /mount2
E0415 10:45:23.837377    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-007000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4153485804/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4153485804/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-007000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4153485804/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.70s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 service hello-node --url --format={{.IP}}
2024/04/15 10:45:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 service hello-node --url --format={{.IP}}: signal: killed (15.002750691s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-007000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-007000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-007000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-007000 image ls --format short --alsologtostderr:
I0415 10:46:03.117130   11217 out.go:291] Setting OutFile to fd 1 ...
I0415 10:46:03.117420   11217 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:46:03.117426   11217 out.go:304] Setting ErrFile to fd 2...
I0415 10:46:03.117429   11217 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:46:03.117629   11217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
I0415 10:46:03.118230   11217 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 10:46:03.118336   11217 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 10:46:03.118716   11217 cli_runner.go:164] Run: docker container inspect functional-007000 --format={{.State.Status}}
I0415 10:46:03.169773   11217 ssh_runner.go:195] Run: systemctl --version
I0415 10:46:03.169838   11217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-007000
I0415 10:46:03.220423   11217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53034 SSHKeyPath:/Users/jenkins/minikube-integration/18634-8183/.minikube/machines/functional-007000/id_rsa Username:docker}
I0415 10:46:03.304381   11217 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-007000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.29.3           | a1d263b5dc5b0 | 82.4MB |
| docker.io/library/nginx                     | alpine            | e289a478ace02 | 42.6MB |
| docker.io/library/nginx                     | latest            | c613f16b66424 | 187MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-scheduler              | v1.29.3           | 8c390d98f50c0 | 59.6MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/google-containers/addon-resizer      | functional-007000 | ffd4cfbbe753e | 32.9MB |
| docker.io/localhost/my-image                | functional-007000 | e97b7b35dcf7a | 1.24MB |
| registry.k8s.io/kube-apiserver              | v1.29.3           | 39f995c9f1996 | 127MB  |
| registry.k8s.io/kube-controller-manager     | v1.29.3           | 6052a25da3f97 | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-007000 | 851452d1b0562 | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-007000 image ls --format table --alsologtostderr:
I0415 10:46:07.564318   11260 out.go:291] Setting OutFile to fd 1 ...
I0415 10:46:07.564512   11260 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:46:07.564517   11260 out.go:304] Setting ErrFile to fd 2...
I0415 10:46:07.564521   11260 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:46:07.564709   11260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
I0415 10:46:07.565337   11260 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 10:46:07.565434   11260 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 10:46:07.565823   11260 cli_runner.go:164] Run: docker container inspect functional-007000 --format={{.State.Status}}
I0415 10:46:07.617282   11260 ssh_runner.go:195] Run: systemctl --version
I0415 10:46:07.617347   11260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-007000
I0415 10:46:07.668575   11260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53034 SSHKeyPath:/Users/jenkins/minikube-integration/18634-8183/.minikube/machines/functional-007000/id_rsa Username:docker}
I0415 10:46:07.752119   11260 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-007000 image ls --format json --alsologtostderr:
[{"id":"c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"851452d1b0562e9a7bdf0151d86302e29a97431072a593024dc6b16abb6af954","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-007000"],"size":"30"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487ad
d79dc3903c4bf14b3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"122000000"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"82400000"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"59600000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-007000"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","
repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"127000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e97b7b35dcf7a956bcefe2205e84303f6d8f8e32f5ac5e771660e8b6fc8f7912","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-007000"],"size":"1240000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["
registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-007000 image ls --format json --alsologtostderr:
I0415 10:46:07.262667   11254 out.go:291] Setting OutFile to fd 1 ...
I0415 10:46:07.262861   11254 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:46:07.262867   11254 out.go:304] Setting ErrFile to fd 2...
I0415 10:46:07.262871   11254 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:46:07.263066   11254 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
I0415 10:46:07.263670   11254 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 10:46:07.263768   11254 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 10:46:07.264157   11254 cli_runner.go:164] Run: docker container inspect functional-007000 --format={{.State.Status}}
I0415 10:46:07.316226   11254 ssh_runner.go:195] Run: systemctl --version
I0415 10:46:07.316291   11254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-007000
I0415 10:46:07.367656   11254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53034 SSHKeyPath:/Users/jenkins/minikube-integration/18634-8183/.minikube/machines/functional-007000/id_rsa Username:docker}
I0415 10:46:07.454135   11254 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
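
Note: the JSON listing is a flat array, convenient for scripting. A minimal sketch of consuming it (jq is an assumption, not used by the suite):

    minikube -p functional-007000 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0]) \(.size)"'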

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-007000 image ls --format yaml --alsologtostderr:
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "82400000"
- id: c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "122000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-007000
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "59600000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 851452d1b0562e9a7bdf0151d86302e29a97431072a593024dc6b16abb6af954
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-007000
size: "30"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "127000000"
- id: e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-007000 image ls --format yaml --alsologtostderr:
I0415 10:46:03.414258   11223 out.go:291] Setting OutFile to fd 1 ...
I0415 10:46:03.414537   11223 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:46:03.414543   11223 out.go:304] Setting ErrFile to fd 2...
I0415 10:46:03.414547   11223 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:46:03.414730   11223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
I0415 10:46:03.416229   11223 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 10:46:03.416351   11223 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 10:46:03.416738   11223 cli_runner.go:164] Run: docker container inspect functional-007000 --format={{.State.Status}}
I0415 10:46:03.470983   11223 ssh_runner.go:195] Run: systemctl --version
I0415 10:46:03.471058   11223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-007000
I0415 10:46:03.522587   11223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53034 SSHKeyPath:/Users/jenkins/minikube-integration/18634-8183/.minikube/machines/functional-007000/id_rsa Username:docker}
I0415 10:46:03.607189   11223 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 ssh pgrep buildkitd: exit status 1 (388.232268ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image build -t localhost/my-image:functional-007000 testdata/build --alsologtostderr
E0415 10:46:04.798405    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 image build -t localhost/my-image:functional-007000 testdata/build --alsologtostderr: (2.847441288s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-007000 image build -t localhost/my-image:functional-007000 testdata/build --alsologtostderr:
I0415 10:46:04.119281   11239 out.go:291] Setting OutFile to fd 1 ...
I0415 10:46:04.119559   11239 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:46:04.119565   11239 out.go:304] Setting ErrFile to fd 2...
I0415 10:46:04.119568   11239 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 10:46:04.119754   11239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
I0415 10:46:04.120355   11239 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 10:46:04.121021   11239 config.go:182] Loaded profile config "functional-007000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 10:46:04.121441   11239 cli_runner.go:164] Run: docker container inspect functional-007000 --format={{.State.Status}}
I0415 10:46:04.176571   11239 ssh_runner.go:195] Run: systemctl --version
I0415 10:46:04.176644   11239 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-007000
I0415 10:46:04.229879   11239 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53034 SSHKeyPath:/Users/jenkins/minikube-integration/18634-8183/.minikube/machines/functional-007000/id_rsa Username:docker}
I0415 10:46:04.315991   11239 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2942786978.tar
I0415 10:46:04.316078   11239 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0415 10:46:04.347571   11239 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2942786978.tar
I0415 10:46:04.353189   11239 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2942786978.tar: stat -c "%s %y" /var/lib/minikube/build/build.2942786978.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2942786978.tar': No such file or directory
I0415 10:46:04.353244   11239 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2942786978.tar --> /var/lib/minikube/build/build.2942786978.tar (3072 bytes)
I0415 10:46:04.402219   11239 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2942786978
I0415 10:46:04.439038   11239 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2942786978 -xf /var/lib/minikube/build/build.2942786978.tar
I0415 10:46:04.456423   11239 docker.go:360] Building image: /var/lib/minikube/build/build.2942786978
I0415 10:46:04.456554   11239 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-007000 /var/lib/minikube/build/build.2942786978
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e97b7b35dcf7a956bcefe2205e84303f6d8f8e32f5ac5e771660e8b6fc8f7912 done
#8 naming to localhost/my-image:functional-007000 done
#8 DONE 0.0s
I0415 10:46:06.776786   11239 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-007000 /var/lib/minikube/build/build.2942786978: (2.320238586s)
I0415 10:46:06.776851   11239 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2942786978
I0415 10:46:06.834546   11239 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2942786978.tar
I0415 10:46:06.851906   11239 build_images.go:217] Built localhost/my-image:functional-007000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.2942786978.tar
I0415 10:46:06.851947   11239 build_images.go:133] succeeded building to: functional-007000
I0415 10:46:06.851956   11239 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)
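
Note: the BuildKit trace above implies a three-step context: a busybox base, RUN true, and ADD content.txt. A minimal sketch that rebuilds an equivalent context by hand; the file contents are an assumption inferred from steps #5-#7, not the repo's actual testdata/build:

    mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
    echo "hello" > content.txt   # stand-in for the 62B build context above
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    minikube -p functional-007000 image build -t localhost/my-image:functional-007000 .
    minikube -p functional-007000 image ls | grep my-image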

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.069038906s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-007000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.13s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image load --daemon gcr.io/google-containers/addon-resizer:functional-007000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 image load --daemon gcr.io/google-containers/addon-resizer:functional-007000 --alsologtostderr: (3.444023959s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.80s)
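
Note: Setup and ImageLoadDaemon together exercise the host-to-cluster image path. A minimal by-hand reproduction using the same commands (logging flags dropped):

    # Pull on the host, retag for the profile, then push into the cluster's runtime.
    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-007000
    out/minikube-darwin-amd64 -p functional-007000 image load --daemon gcr.io/google-containers/addon-resizer:functional-007000
    out/minikube-darwin-amd64 -p functional-007000 image ls   # the tag should now be listed in-cluster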

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-007000 service hello-node --url: signal: killed (15.00297584s)
-- stdout --
	http://127.0.0.1:53404
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:53404
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)
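
Note: the "signal: killed" exit is expected: the harness reads the tunnel URL and then kills the command after 15s, because on darwin with the Docker driver the tunnel only lives while its terminal stays open. The interactive equivalent, with the port being ephemeral (53404 was this run's value):

    # Terminal 1: prints the tunnel URL, then must stay open.
    out/minikube-darwin-amd64 -p functional-007000 service hello-node --url
    # Terminal 2: hit the printed endpoint while terminal 1 is still running.
    curl http://127.0.0.1:53404/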

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image load --daemon gcr.io/google-containers/addon-resizer:functional-007000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 image load --daemon gcr.io/google-containers/addon-resizer:functional-007000 --alsologtostderr: (2.037125303s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.33s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.621383039s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-007000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image load --daemon gcr.io/google-containers/addon-resizer:functional-007000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 image load --daemon gcr.io/google-containers/addon-resizer:functional-007000 --alsologtostderr: (2.944674507s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.92s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image save gcr.io/google-containers/addon-resizer:functional-007000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 image save gcr.io/google-containers/addon-resizer:functional-007000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.086873129s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.09s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image rm gcr.io/google-containers/addon-resizer:functional-007000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.684556513s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.98s)
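
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile form a tar round trip. The same sequence condensed, reusing the paths from the log:

    # Export the image from the cluster to a tarball on the host ...
    out/minikube-darwin-amd64 -p functional-007000 image save gcr.io/google-containers/addon-resizer:functional-007000 /Users/jenkins/workspace/addon-resizer-save.tar
    # ... remove it from the cluster ...
    out/minikube-darwin-amd64 -p functional-007000 image rm gcr.io/google-containers/addon-resizer:functional-007000
    # ... then restore it from the tarball and confirm it is back.
    out/minikube-darwin-amd64 -p functional-007000 image load /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-007000 image ls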

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-007000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 image save --daemon gcr.io/google-containers/addon-resizer:functional-007000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-007000 image save --daemon gcr.io/google-containers/addon-resizer:functional-007000 --alsologtostderr: (1.111913851s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-007000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)
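
Note: ImageSaveDaemon is the inverse of ImageLoadDaemon: the image travels from the cluster back into the host's Docker daemon. Condensed:

    # Drop the host copy so the restore is observable, then pull it back out of the cluster.
    docker rmi gcr.io/google-containers/addon-resizer:functional-007000
    out/minikube-darwin-amd64 -p functional-007000 image save --daemon gcr.io/google-containers/addon-resizer:functional-007000
    docker image inspect gcr.io/google-containers/addon-resizer:functional-007000   # succeeds only if the save worked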

TestFunctional/parallel/DockerEnv/bash (1.49s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-007000 docker-env) && out/minikube-darwin-amd64 status -p functional-007000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-007000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.49s)
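
Note: this test demonstrates the standard pattern for pointing the host docker CLI at the cluster's daemon. The same one-liner works interactively:

    # Export DOCKER_HOST and friends for this shell, then talk to the in-cluster daemon.
    eval "$(out/minikube-darwin-amd64 -p functional-007000 docker-env)"
    docker images   # now lists the cluster's images rather than the host's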

TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)
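
Note: update-context rewrites the profile's kubeconfig entry (useful after the API server's host port changes). A minimal check, assuming kubectl is on PATH:

    out/minikube-darwin-amd64 -p functional-007000 update-context
    kubectl config current-context   # should print the profile name, functional-007000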

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-007000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-007000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-007000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-007000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestMultiControlPlane/serial/StartCluster (106.58s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-597000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0415 10:47:26.717577    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-597000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m45.432839438s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr: (1.146710855s)
--- PASS: TestMultiControlPlane/serial/StartCluster (106.58s)
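
Note: StartCluster brings up a three-control-plane cluster via the --ha flag. Stripped of the test's logging flags, the equivalent invocation is:

    # Three control planes on the docker driver; --wait=true blocks until components are healthy.
    out/minikube-darwin-amd64 start -p ha-597000 --ha --wait=true --memory=2200 --driver=docker
    out/minikube-darwin-amd64 -p ha-597000 status   # expect ha-597000, -m02, -m03, each "type: Control Plane"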

TestMultiControlPlane/serial/DeployApp (5.08s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-597000 -- rollout status deployment/busybox: (2.678563493s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-g8kpm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-vl66p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-xgc58 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-g8kpm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-vl66p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-xgc58 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-g8kpm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-vl66p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-xgc58 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.08s)
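
Note: DeployApp is a DNS smoke test: deploy the busybox pods, wait for the rollout, then resolve public and in-cluster names from each replica. Condensed to a single replica (pod name from this run):

    out/minikube-darwin-amd64 kubectl -p ha-597000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-darwin-amd64 kubectl -p ha-597000 -- rollout status deployment/busybox
    out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-g8kpm -- nslookup kubernetes.io
    out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-g8kpm -- nslookup kubernetes.default.svc.cluster.local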

TestMultiControlPlane/serial/PingHostFromPods (1.35s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-g8kpm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-g8kpm -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-vl66p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-vl66p -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-xgc58 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-xgc58 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.35s)
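
Note: the pipeline here extracts the host's IP as seen from inside a pod: busybox's nslookup prints the resolved address for host.minikube.internal on its fifth output line, awk 'NR==5' keeps just that line, and cut -d' ' -f3 takes the third space-delimited field; the pod then pings the result (192.168.65.254 on this run):

    out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-g8kpm -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-darwin-amd64 kubectl -p ha-597000 -- exec busybox-7fdf7869d9-g8kpm -- sh -c "ping -c 1 192.168.65.254"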

TestMultiControlPlane/serial/AddWorkerNode (20.03s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-597000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-597000 -v=7 --alsologtostderr: (18.636222083s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr: (1.395975113s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.03s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-597000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.12403599s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.12s)

TestMultiControlPlane/serial/CopyFile (24.62s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-darwin-amd64 -p ha-597000 status --output json -v=7 --alsologtostderr: (1.429597259s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp testdata/cp-test.txt ha-597000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile367409587/001/cp-test_ha-597000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000:/home/docker/cp-test.txt ha-597000-m02:/home/docker/cp-test_ha-597000_ha-597000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test_ha-597000_ha-597000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000:/home/docker/cp-test.txt ha-597000-m03:/home/docker/cp-test_ha-597000_ha-597000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test_ha-597000_ha-597000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000:/home/docker/cp-test.txt ha-597000-m04:/home/docker/cp-test_ha-597000_ha-597000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test_ha-597000_ha-597000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp testdata/cp-test.txt ha-597000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile367409587/001/cp-test_ha-597000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m02:/home/docker/cp-test.txt ha-597000:/home/docker/cp-test_ha-597000-m02_ha-597000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test_ha-597000-m02_ha-597000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m02:/home/docker/cp-test.txt ha-597000-m03:/home/docker/cp-test_ha-597000-m02_ha-597000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test_ha-597000-m02_ha-597000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m02:/home/docker/cp-test.txt ha-597000-m04:/home/docker/cp-test_ha-597000-m02_ha-597000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test_ha-597000-m02_ha-597000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp testdata/cp-test.txt ha-597000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile367409587/001/cp-test_ha-597000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m03:/home/docker/cp-test.txt ha-597000:/home/docker/cp-test_ha-597000-m03_ha-597000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test_ha-597000-m03_ha-597000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m03:/home/docker/cp-test.txt ha-597000-m02:/home/docker/cp-test_ha-597000-m03_ha-597000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test_ha-597000-m03_ha-597000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m03:/home/docker/cp-test.txt ha-597000-m04:/home/docker/cp-test_ha-597000-m03_ha-597000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test_ha-597000-m03_ha-597000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp testdata/cp-test.txt ha-597000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m04:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile367409587/001/cp-test_ha-597000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m04:/home/docker/cp-test.txt ha-597000:/home/docker/cp-test_ha-597000-m04_ha-597000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test_ha-597000-m04_ha-597000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m04:/home/docker/cp-test.txt ha-597000-m02:/home/docker/cp-test_ha-597000-m04_ha-597000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test_ha-597000-m04_ha-597000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 cp ha-597000-m04:/home/docker/cp-test.txt ha-597000-m03:/home/docker/cp-test_ha-597000-m04_ha-597000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test_ha-597000-m04_ha-597000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (24.62s)
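
Note: CopyFile exercises every direction of minikube cp across the four nodes, each verified with a sudo cat over ssh. The three shapes, condensed (the /tmp destination is illustrative):

    out/minikube-darwin-amd64 -p ha-597000 cp testdata/cp-test.txt ha-597000:/home/docker/cp-test.txt                   # host -> node
    out/minikube-darwin-amd64 -p ha-597000 cp ha-597000:/home/docker/cp-test.txt /tmp/cp-test_ha-597000.txt             # node -> host
    out/minikube-darwin-amd64 -p ha-597000 cp ha-597000:/home/docker/cp-test.txt ha-597000-m02:/home/docker/cp-test.txt # node -> node
    out/minikube-darwin-amd64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"                     # verify on the target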

TestMultiControlPlane/serial/StopSecondaryNode (11.94s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-597000 node stop m02 -v=7 --alsologtostderr: (10.864630409s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr: exit status 7 (1.075455196s)
-- stdout --
	ha-597000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-597000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597000-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0415 10:49:21.342616   12512 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:49:21.342911   12512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:49:21.342917   12512 out.go:304] Setting ErrFile to fd 2...
	I0415 10:49:21.342921   12512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:49:21.343110   12512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 10:49:21.343292   12512 out.go:298] Setting JSON to false
	I0415 10:49:21.343316   12512 mustload.go:65] Loading cluster: ha-597000
	I0415 10:49:21.343350   12512 notify.go:220] Checking for updates...
	I0415 10:49:21.343626   12512 config.go:182] Loaded profile config "ha-597000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 10:49:21.343641   12512 status.go:255] checking status of ha-597000 ...
	I0415 10:49:21.344081   12512 cli_runner.go:164] Run: docker container inspect ha-597000 --format={{.State.Status}}
	I0415 10:49:21.395006   12512 status.go:330] ha-597000 host status = "Running" (err=<nil>)
	I0415 10:49:21.395042   12512 host.go:66] Checking if "ha-597000" exists ...
	I0415 10:49:21.395291   12512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-597000
	I0415 10:49:21.445265   12512 host.go:66] Checking if "ha-597000" exists ...
	I0415 10:49:21.445521   12512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 10:49:21.445579   12512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-597000
	I0415 10:49:21.496124   12512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53493 SSHKeyPath:/Users/jenkins/minikube-integration/18634-8183/.minikube/machines/ha-597000/id_rsa Username:docker}
	I0415 10:49:21.581473   12512 ssh_runner.go:195] Run: systemctl --version
	I0415 10:49:21.586405   12512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 10:49:21.603299   12512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-597000
	I0415 10:49:21.654428   12512 kubeconfig.go:125] found "ha-597000" server: "https://127.0.0.1:53492"
	I0415 10:49:21.654457   12512 api_server.go:166] Checking apiserver status ...
	I0415 10:49:21.654499   12512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 10:49:21.671483   12512 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2203/cgroup
	W0415 10:49:21.687041   12512 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 10:49:21.687110   12512 ssh_runner.go:195] Run: ls
	I0415 10:49:21.691405   12512 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53492/healthz ...
	I0415 10:49:21.695736   12512 api_server.go:279] https://127.0.0.1:53492/healthz returned 200:
	ok
	I0415 10:49:21.695753   12512 status.go:422] ha-597000 apiserver status = Running (err=<nil>)
	I0415 10:49:21.695770   12512 status.go:257] ha-597000 status: &{Name:ha-597000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:49:21.695781   12512 status.go:255] checking status of ha-597000-m02 ...
	I0415 10:49:21.696021   12512 cli_runner.go:164] Run: docker container inspect ha-597000-m02 --format={{.State.Status}}
	I0415 10:49:21.746528   12512 status.go:330] ha-597000-m02 host status = "Stopped" (err=<nil>)
	I0415 10:49:21.746556   12512 status.go:343] host is not running, skipping remaining checks
	I0415 10:49:21.746569   12512 status.go:257] ha-597000-m02 status: &{Name:ha-597000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:49:21.746587   12512 status.go:255] checking status of ha-597000-m03 ...
	I0415 10:49:21.746887   12512 cli_runner.go:164] Run: docker container inspect ha-597000-m03 --format={{.State.Status}}
	I0415 10:49:21.799702   12512 status.go:330] ha-597000-m03 host status = "Running" (err=<nil>)
	I0415 10:49:21.799730   12512 host.go:66] Checking if "ha-597000-m03" exists ...
	I0415 10:49:21.799996   12512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-597000-m03
	I0415 10:49:21.853065   12512 host.go:66] Checking if "ha-597000-m03" exists ...
	I0415 10:49:21.853357   12512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 10:49:21.853414   12512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-597000-m03
	I0415 10:49:21.904136   12512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53600 SSHKeyPath:/Users/jenkins/minikube-integration/18634-8183/.minikube/machines/ha-597000-m03/id_rsa Username:docker}
	I0415 10:49:21.989625   12512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 10:49:22.006550   12512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-597000
	I0415 10:49:22.058229   12512 kubeconfig.go:125] found "ha-597000" server: "https://127.0.0.1:53492"
	I0415 10:49:22.058256   12512 api_server.go:166] Checking apiserver status ...
	I0415 10:49:22.058293   12512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 10:49:22.075318   12512 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2055/cgroup
	W0415 10:49:22.091390   12512 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2055/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 10:49:22.091460   12512 ssh_runner.go:195] Run: ls
	I0415 10:49:22.095869   12512 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53492/healthz ...
	I0415 10:49:22.100215   12512 api_server.go:279] https://127.0.0.1:53492/healthz returned 200:
	ok
	I0415 10:49:22.100227   12512 status.go:422] ha-597000-m03 apiserver status = Running (err=<nil>)
	I0415 10:49:22.100236   12512 status.go:257] ha-597000-m03 status: &{Name:ha-597000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:49:22.100247   12512 status.go:255] checking status of ha-597000-m04 ...
	I0415 10:49:22.100481   12512 cli_runner.go:164] Run: docker container inspect ha-597000-m04 --format={{.State.Status}}
	I0415 10:49:22.151418   12512 status.go:330] ha-597000-m04 host status = "Running" (err=<nil>)
	I0415 10:49:22.151443   12512 host.go:66] Checking if "ha-597000-m04" exists ...
	I0415 10:49:22.151712   12512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-597000-m04
	I0415 10:49:22.200949   12512 host.go:66] Checking if "ha-597000-m04" exists ...
	I0415 10:49:22.201209   12512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 10:49:22.201257   12512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-597000-m04
	I0415 10:49:22.250939   12512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53720 SSHKeyPath:/Users/jenkins/minikube-integration/18634-8183/.minikube/machines/ha-597000-m04/id_rsa Username:docker}
	I0415 10:49:22.336452   12512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 10:49:22.354011   12512 status.go:257] ha-597000-m04 status: &{Name:ha-597000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.94s)
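
Note: the exit status 7 above is expected while m02 is down: minikube status prints per-node state on stdout and signals "not everything is running" through a non-zero exit code, which is what the test asserts. A quick check:

    out/minikube-darwin-amd64 -p ha-597000 status
    echo "status exit code: $?"   # 0 only when all nodes are fully up; 7 was observed here with m02 stopped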

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

TestMultiControlPlane/serial/RestartSecondaryNode (130.92s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 node start m02 -v=7 --alsologtostderr
E0415 10:49:42.159470    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:49:42.164732    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:49:42.174814    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:49:42.195384    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:49:42.235551    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:49:42.317471    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:49:42.478323    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:49:42.715547    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 10:49:42.799612    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:49:43.439808    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:49:44.720610    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:49:47.281691    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:49:52.402676    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:50:02.643108    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:50:10.555709    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
E0415 10:50:23.124116    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:51:04.084369    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-597000 node start m02 -v=7 --alsologtostderr: (2m9.47324146s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr: (1.397182425s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (130.92s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.133442599s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (188.67s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-597000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-597000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-597000 -v=7 --alsologtostderr: (34.143933667s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-597000 --wait=true -v=7 --alsologtostderr
E0415 10:52:26.041166    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:54:42.193162    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:54:42.749991    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-597000 --wait=true -v=7 --alsologtostderr: (2m34.389971774s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-597000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (188.67s)
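
Note: RestartClusterKeepsNodes asserts that a full stop/start cycle preserves the node list. The comparison the test effectively makes, sketched by hand:

    out/minikube-darwin-amd64 node list -p ha-597000 > /tmp/nodes-before.txt
    out/minikube-darwin-amd64 stop -p ha-597000
    out/minikube-darwin-amd64 start -p ha-597000 --wait=true
    out/minikube-darwin-amd64 node list -p ha-597000 > /tmp/nodes-after.txt
    diff /tmp/nodes-before.txt /tmp/nodes-after.txt   # an empty diff means the restart kept every node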

TestMultiControlPlane/serial/DeleteSecondaryNode (11.69s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-597000 node delete m03 -v=7 --alsologtostderr: (10.548757051s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr: (1.020746438s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.69s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (32.81s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 stop -v=7 --alsologtostderr
E0415 10:55:09.880766    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-597000 stop -v=7 --alsologtostderr: (32.593412867s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr: exit status 7 (213.82535ms)
-- stdout --
	ha-597000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-597000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-597000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0415 10:55:29.114884   13178 out.go:291] Setting OutFile to fd 1 ...
	I0415 10:55:29.115172   13178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:55:29.115178   13178 out.go:304] Setting ErrFile to fd 2...
	I0415 10:55:29.115181   13178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 10:55:29.115359   13178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18634-8183/.minikube/bin
	I0415 10:55:29.115531   13178 out.go:298] Setting JSON to false
	I0415 10:55:29.115554   13178 mustload.go:65] Loading cluster: ha-597000
	I0415 10:55:29.115586   13178 notify.go:220] Checking for updates...
	I0415 10:55:29.115851   13178 config.go:182] Loaded profile config "ha-597000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 10:55:29.115865   13178 status.go:255] checking status of ha-597000 ...
	I0415 10:55:29.116243   13178 cli_runner.go:164] Run: docker container inspect ha-597000 --format={{.State.Status}}
	I0415 10:55:29.166549   13178 status.go:330] ha-597000 host status = "Stopped" (err=<nil>)
	I0415 10:55:29.166570   13178 status.go:343] host is not running, skipping remaining checks
	I0415 10:55:29.166577   13178 status.go:257] ha-597000 status: &{Name:ha-597000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:55:29.166595   13178 status.go:255] checking status of ha-597000-m02 ...
	I0415 10:55:29.166825   13178 cli_runner.go:164] Run: docker container inspect ha-597000-m02 --format={{.State.Status}}
	I0415 10:55:29.216547   13178 status.go:330] ha-597000-m02 host status = "Stopped" (err=<nil>)
	I0415 10:55:29.216584   13178 status.go:343] host is not running, skipping remaining checks
	I0415 10:55:29.216595   13178 status.go:257] ha-597000-m02 status: &{Name:ha-597000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 10:55:29.216616   13178 status.go:255] checking status of ha-597000-m04 ...
	I0415 10:55:29.216893   13178 cli_runner.go:164] Run: docker container inspect ha-597000-m04 --format={{.State.Status}}
	I0415 10:55:29.266084   13178 status.go:330] ha-597000-m04 host status = "Stopped" (err=<nil>)
	I0415 10:55:29.266122   13178 status.go:343] host is not running, skipping remaining checks
	I0415 10:55:29.266132   13178 status.go:257] ha-597000-m04 status: &{Name:ha-597000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.81s)

TestMultiControlPlane/serial/RestartCluster (117.3s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-597000 --wait=true -v=7 --alsologtostderr --driver=docker 
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-597000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m56.10312103s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr: (1.078584207s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (117.30s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (37.99s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-597000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-597000 --control-plane -v=7 --alsologtostderr: (36.592523416s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-darwin-amd64 -p ha-597000 status -v=7 --alsologtostderr: (1.394967576s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.99s)
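
Note: AddSecondaryNode grows the control plane back to three members after the earlier delete. The core command pair, minus logging flags:

    out/minikube-darwin-amd64 node add -p ha-597000 --control-plane
    out/minikube-darwin-amd64 -p ha-597000 status   # the new node should report "type: Control Plane"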

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.19s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.18741053s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.19s)

TestImageBuild/serial/Setup (21.32s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-750000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-750000 --driver=docker : (21.317863895s)
--- PASS: TestImageBuild/serial/Setup (21.32s)

TestImageBuild/serial/NormalBuild (1.73s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-750000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-750000: (1.730809484s)
--- PASS: TestImageBuild/serial/NormalBuild (1.73s)

TestImageBuild/serial/BuildWithBuildArg (1.03s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-750000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-750000: (1.029630526s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.03s)
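
Note: BuildWithBuildArg forwards --build-opt values to the underlying docker build. A hypothetical Dockerfile that would consume the ENV_A argument passed above (the real testdata/image-build/test-arg contents are not shown in this log):

    # Hypothetical testdata: a Dockerfile that reads the build arg.
    { echo 'FROM busybox'
      echo 'ARG ENV_A'
      echo 'RUN echo "ENV_A=$ENV_A"'; } > Dockerfile
    out/minikube-darwin-amd64 image build -t aaa:latest \
      --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache . -p image-750000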

TestImageBuild/serial/BuildWithDockerIgnore (0.86s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-750000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.86s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.86s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-750000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.86s)

TestJSONOutput/start/Command (75.49s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-578000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0415 10:59:42.191717    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 10:59:42.747622    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-578000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m15.493938526s)
--- PASS: TestJSONOutput/start/Command (75.49s)
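
Note: with --output=json, start emits one JSON event per line; the Audit and step-ordering subtests below consume that stream. A sketch of inspecting the step events with jq, assuming minikube's CloudEvents-style "io.k8s.sigs.minikube.step" type and string-valued data.currentstep/data.name fields:

    out/minikube-darwin-amd64 start -p json-output-578000 --output=json --user=testUser --memory=2200 --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + ": " + .data.name'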

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-578000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.60s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-578000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-578000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-578000 --output=json --user=testUser: (10.713249974s)
--- PASS: TestJSONOutput/stop/Command (10.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.78s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-825000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-825000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (392.007576ms)
-- stdout --
	{"specversion":"1.0","id":"251467bb-ff5b-42cf-b494-8156e0dd24c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-825000] minikube v1.33.0-beta.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"399f57a2-08a3-4117-a330-b9d03b2b635e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18634"}}
	{"specversion":"1.0","id":"a1d39a6c-6538-4def-984a-ab13d8c506b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig"}}
	{"specversion":"1.0","id":"c0096917-87f3-4242-91d9-7bde213ee016","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7221449a-fec2-40e9-be5e-0b051bfcab94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b47cf82-8d86-4c68-bc07-6ee0b9c4b3ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18634-8183/.minikube"}}
	{"specversion":"1.0","id":"709cf0f6-bd44-4c34-9346-871984e29394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c1da2758-cd6d-4bf9-8358-17257a81f545","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-825000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-825000
--- PASS: TestErrorJSONOutput (0.78s)
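Every event that --output=json emits is a single-line CloudEvents-style object like the ones captured above; subtests such as Audit, DistinctCurrentSteps, and IncreasingCurrentSteps consume exactly these fields. A minimal Go sketch of decoding one line (the struct mirrors only the keys visible in the captured stdout and is illustrative, not the suite's own type):

	// decode_event.go — parse one CloudEvents-style line from `minikube ... --output=json`.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type event struct {
		Specversion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"` // currentstep, message, exitcode, ...
	}

	func main() {
		line := `{"specversion":"1.0","id":"c1da2758-cd6d-4bf9-8358-17257a81f545","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"]) // io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS 56
	}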

TestKicCustomNetwork/create_custom_network (23.68s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-574000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-574000 --network=: (21.216382694s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-574000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-574000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-574000: (2.408251647s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.68s)

TestKicCustomNetwork/use_default_bridge_network (24.33s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-420000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-420000 --network=bridge: (22.039509463s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-420000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-420000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-420000: (2.237490412s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.33s)

TestKicExistingNetwork (22.95s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-536000 --network=existing-network
E0415 11:01:05.947865    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-536000 --network=existing-network: (20.348775903s)
helpers_test.go:175: Cleaning up "existing-network-536000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-536000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-536000: (2.213396037s)
--- PASS: TestKicExistingNetwork (22.95s)

TestKicCustomSubnet (22.68s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-501000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-501000 --subnet=192.168.60.0/24: (20.27284041s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-501000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-501000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-501000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-501000: (2.350832484s)
--- PASS: TestKicCustomSubnet (22.68s)
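The assertion behind this pass is visible in the two commands above: start with --subnet, then read the subnet back out of the created network's IPAM config. A minimal Go sketch of that check, reusing the exact docker inspect template from the log (the comparison itself is illustrative):

	// check_subnet.go — confirm the custom network carries the requested subnet.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "192.168.60.0/24" // the subnet requested via --subnet above
		out, err := exec.Command("docker", "network", "inspect",
			"custom-subnet-501000", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		fmt.Println(got == want) // the test asserts these match
	}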

TestKicStaticIP (24.81s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-616000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-616000 --static-ip=192.168.200.200: (22.175744705s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-616000 ip
helpers_test.go:175: Cleaning up "static-ip-616000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-616000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-616000: (2.395211354s)
--- PASS: TestKicStaticIP (24.81s)

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (48.56s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-538000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-538000 --driver=docker : (20.893420974s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-541000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-541000 --driver=docker : (20.959316927s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-538000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-541000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-541000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-541000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-541000: (2.419380746s)
helpers_test.go:175: Cleaning up "first-538000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-538000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-538000: (2.393006659s)
--- PASS: TestMinikubeProfile (48.56s)
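The profile checks above lean on `profile list -ojson` emitting machine-readable output. A minimal Go sketch that runs the same command and prints the top-level keys without assuming any particular schema (illustrative only):

	// list_profiles.go — dump the top-level structure of `profile list -ojson`.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		var v map[string]any
		if err := json.Unmarshal(out, &v); err != nil {
			panic(err)
		}
		for k := range v { // e.g. lists of valid and invalid profiles
			fmt.Println(k)
		}
	}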

TestMountStart/serial/StartWithMountFirst (7.84s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-006000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-006000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.836791747s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.84s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-006000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
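The verification is a single probe over ssh: list the host mount from inside the guest. A minimal Go sketch of the same check (arguments verbatim from the log; treating any successful listing as a pass is illustrative):

	// verify_mount.go — probe the host mount inside the mount-start-1-006000 node.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64",
			"-p", "mount-start-1-006000", "ssh", "--", "ls", "/minikube-host").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			panic(err) // the mount is missing or unreadable
		}
	}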

TestMountStart/serial/StartWithMountSecond (7.66s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-019000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-019000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.661349451s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.66s)

TestPreload (111.33s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-435000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0415 11:49:42.249563    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/functional-007000/client.crt: no such file or directory
E0415 11:49:42.806425    8640 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18634-8183/.minikube/profiles/addons-893000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-435000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m14.684141754s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-435000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-435000 image pull gcr.io/k8s-minikube/busybox: (1.571141139s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-435000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-435000: (10.923084646s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-435000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-435000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (21.342583908s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-435000 image list
helpers_test.go:175: Cleaning up "test-preload-435000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-435000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-435000: (2.488545341s)
--- PASS: TestPreload (111.33s)
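The sequence above exercises preload handling end to end: start with --preload=false on v1.24.4, pull busybox, stop, restart, then confirm the pulled image survived via `image list`. A minimal Go sketch of that final check, assuming `image list` prints image references as plain text (the substring match is illustrative):

	// check_preload.go — confirm busybox is still listed after the restart.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64",
			"-p", "test-preload-435000", "image", "list").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.Contains(string(out), "gcr.io/k8s-minikube/busybox")) // true if the image survived
	}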

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (11.76s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0-beta.0 on darwin
- MINIKUBE_LOCATION=18634
- KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3726772283/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3726772283/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3726772283/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3726772283/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (11.76s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.46s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0-beta.0 on darwin
- MINIKUBE_LOCATION=18634
- KUBECONFIG=/Users/jenkins/minikube-integration/18634-8183/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1858824512/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1858824512/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1858824512/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1858824512/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.46s)

Test skip (19/211)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (13.80s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 13.704731ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-78jdm" [15095fb4-dffa-4f3c-9373-8c67eef9b1aa] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006168135s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qrhs4" [b02a8017-4275-4973-90a2-90a8b1e98e42] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004721616s
addons_test.go:340: (dbg) Run:  kubectl --context addons-893000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-893000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-893000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.726607349s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.80s)

TestAddons/parallel/Ingress (10.83s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-893000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-893000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-893000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bf54a429-a408-4f69-97d6-e2972014b092] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bf54a429-a408-4f69-97d6-e2972014b092] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004978397s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-893000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.83s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (12.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-007000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-007000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-tq97p" [91cb2a64-56c6-46ae-a855-6c0eea6066f9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-tq97p" [91cb2a64-56c6-46ae-a855-6c0eea6066f9] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004991275s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (12.13s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)