Test Report: Docker_macOS 19302

686e9da65a2d4195f8e8610efbc417c3b07d1722:2024-07-18:35410

Tests failed (22/217)

TestOffline (754.19s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-679000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-679000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m33.63498333s)

-- stdout --
	* [offline-docker-679000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-679000" primary control-plane node in "offline-docker-679000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-679000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0718 21:54:44.783343   11501 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:54:44.783533   11501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:54:44.783538   11501 out.go:304] Setting ErrFile to fd 2...
	I0718 21:54:44.783542   11501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:54:44.783723   11501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:54:44.785275   11501 out.go:298] Setting JSON to false
	I0718 21:54:44.809357   11501 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6857,"bootTime":1721358027,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 21:54:44.809479   11501 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:54:44.831137   11501 out.go:177] * [offline-docker-679000] minikube v1.33.1 on Darwin 14.5
	I0718 21:54:44.872893   11501 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:54:44.872923   11501 notify.go:220] Checking for updates...
	I0718 21:54:44.914974   11501 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 21:54:44.935898   11501 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 21:54:44.956985   11501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:54:44.977929   11501 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	I0718 21:54:44.998936   11501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:54:45.020319   11501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:54:45.064265   11501 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 21:54:45.064445   11501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 21:54:45.144907   11501 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:170 SystemTime:2024-07-19 04:54:45.135623483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-
0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-d
esktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plu
gins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 21:54:45.187039   11501 out.go:177] * Using the docker driver based on user configuration
	I0718 21:54:45.209250   11501 start.go:297] selected driver: docker
	I0718 21:54:45.209274   11501 start.go:901] validating driver "docker" against <nil>
	I0718 21:54:45.209294   11501 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:54:45.213809   11501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 21:54:45.305220   11501 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:170 SystemTime:2024-07-19 04:54:45.296575038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-
0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-d
esktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plu
gins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 21:54:45.305411   11501 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:54:45.305597   11501 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:54:45.326951   11501 out.go:177] * Using Docker Desktop driver with root privileges
	I0718 21:54:45.347988   11501 cni.go:84] Creating CNI manager for ""
	I0718 21:54:45.348011   11501 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:54:45.348017   11501 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:54:45.348086   11501 start.go:340] cluster config:
	{Name:offline-docker-679000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-679000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:54:45.369348   11501 out.go:177] * Starting "offline-docker-679000" primary control-plane node in "offline-docker-679000" cluster
	I0718 21:54:45.412262   11501 cache.go:121] Beginning downloading kic base image for docker with docker
	I0718 21:54:45.454128   11501 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0718 21:54:45.496345   11501 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:54:45.496401   11501 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0718 21:54:45.496420   11501 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 21:54:45.496452   11501 cache.go:56] Caching tarball of preloaded images
	I0718 21:54:45.496689   11501 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 21:54:45.496710   11501 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:54:45.498156   11501 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/offline-docker-679000/config.json ...
	I0718 21:54:45.498281   11501 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/offline-docker-679000/config.json: {Name:mk66c471f00d7bf9f1c4e3bf6e4f8adc68f85335 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0718 21:54:45.531237   11501 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0718 21:54:45.531250   11501 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0718 21:54:45.531406   11501 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0718 21:54:45.531424   11501 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0718 21:54:45.531430   11501 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0718 21:54:45.531441   11501 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0718 21:54:45.531446   11501 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0718 21:54:45.750052   11501 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0718 21:54:45.750102   11501 cache.go:194] Successfully downloaded all kic artifacts
	I0718 21:54:45.750149   11501 start.go:360] acquireMachinesLock for offline-docker-679000: {Name:mk9fb5868e7e31378bbb16fe79b1074eafed10e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:54:45.750322   11501 start.go:364] duration metric: took 161.522µs to acquireMachinesLock for "offline-docker-679000"
	I0718 21:54:45.750350   11501 start.go:93] Provisioning new machine with config: &{Name:offline-docker-679000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-679000 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:54:45.750415   11501 start.go:125] createHost starting for "" (driver="docker")
	I0718 21:54:45.794001   11501 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0718 21:54:45.794204   11501 start.go:159] libmachine.API.Create for "offline-docker-679000" (driver="docker")
	I0718 21:54:45.794236   11501 client.go:168] LocalClient.Create starting
	I0718 21:54:45.794325   11501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 21:54:45.794377   11501 main.go:141] libmachine: Decoding PEM data...
	I0718 21:54:45.794402   11501 main.go:141] libmachine: Parsing certificate...
	I0718 21:54:45.794471   11501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 21:54:45.794510   11501 main.go:141] libmachine: Decoding PEM data...
	I0718 21:54:45.794524   11501 main.go:141] libmachine: Parsing certificate...
	I0718 21:54:45.794940   11501 cli_runner.go:164] Run: docker network inspect offline-docker-679000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 21:54:45.859300   11501 cli_runner.go:211] docker network inspect offline-docker-679000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 21:54:45.859410   11501 network_create.go:284] running [docker network inspect offline-docker-679000] to gather additional debugging logs...
	I0718 21:54:45.859429   11501 cli_runner.go:164] Run: docker network inspect offline-docker-679000
	W0718 21:54:45.883766   11501 cli_runner.go:211] docker network inspect offline-docker-679000 returned with exit code 1
	I0718 21:54:45.883805   11501 network_create.go:287] error running [docker network inspect offline-docker-679000]: docker network inspect offline-docker-679000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-679000 not found
	I0718 21:54:45.883820   11501 network_create.go:289] output of [docker network inspect offline-docker-679000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-679000 not found
	
	** /stderr **
	I0718 21:54:45.883958   11501 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 21:54:45.903348   11501 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:54:45.904954   11501 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:54:45.905297   11501 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001604f50}
	I0718 21:54:45.905315   11501 network_create.go:124] attempt to create docker network offline-docker-679000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0718 21:54:45.905390   11501 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-679000 offline-docker-679000
	I0718 21:54:46.051741   11501 network_create.go:108] docker network offline-docker-679000 192.168.67.0/24 created
	I0718 21:54:46.051785   11501 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-679000" container
	I0718 21:54:46.051934   11501 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 21:54:46.072194   11501 cli_runner.go:164] Run: docker volume create offline-docker-679000 --label name.minikube.sigs.k8s.io=offline-docker-679000 --label created_by.minikube.sigs.k8s.io=true
	I0718 21:54:46.091856   11501 oci.go:103] Successfully created a docker volume offline-docker-679000
	I0718 21:54:46.091975   11501 cli_runner.go:164] Run: docker run --rm --name offline-docker-679000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-679000 --entrypoint /usr/bin/test -v offline-docker-679000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 21:54:46.888998   11501 oci.go:107] Successfully prepared a docker volume offline-docker-679000
	I0718 21:54:46.889079   11501 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:54:46.889097   11501 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 21:54:46.889209   11501 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-679000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 22:00:45.914029   11501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:00:45.914171   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:00:45.935047   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:00:45.935172   11501 retry.go:31] will retry after 307.625473ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:46.243680   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:00:46.262898   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:00:46.263012   11501 retry.go:31] will retry after 442.323742ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:46.707127   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:00:46.726694   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:00:46.726786   11501 retry.go:31] will retry after 361.848112ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:47.089406   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:00:47.108960   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	W0718 22:00:47.109058   11501 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	
	W0718 22:00:47.109076   11501 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:47.109144   11501 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:00:47.109200   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:00:47.127570   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:00:47.127660   11501 retry.go:31] will retry after 322.181015ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:47.452245   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:00:47.471762   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:00:47.471861   11501 retry.go:31] will retry after 479.934623ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:47.952252   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:00:47.971413   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:00:47.971505   11501 retry.go:31] will retry after 308.08964ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:48.281979   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:00:48.302465   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:00:48.302563   11501 retry.go:31] will retry after 537.084182ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:48.842066   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:00:48.862576   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	W0718 22:00:48.862672   11501 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	
	W0718 22:00:48.862690   11501 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:48.862706   11501 start.go:128] duration metric: took 6m2.994903984s to createHost
	I0718 22:00:48.862713   11501 start.go:83] releasing machines lock for "offline-docker-679000", held for 6m2.995008s
	W0718 22:00:48.862729   11501 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0718 22:00:48.863182   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:00:48.881526   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	I0718 22:00:48.881585   11501 delete.go:82] Unable to get host status for offline-docker-679000, assuming it has already been deleted: state: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	W0718 22:00:48.881673   11501 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0718 22:00:48.881682   11501 start.go:729] Will try again in 5 seconds ...
	I0718 22:00:53.883315   11501 start.go:360] acquireMachinesLock for offline-docker-679000: {Name:mk9fb5868e7e31378bbb16fe79b1074eafed10e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 22:00:53.884460   11501 start.go:364] duration metric: took 351.908µs to acquireMachinesLock for "offline-docker-679000"
	I0718 22:00:53.884521   11501 start.go:96] Skipping create...Using existing machine configuration
	I0718 22:00:53.884541   11501 fix.go:54] fixHost starting: 
	I0718 22:00:53.885034   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:00:53.903633   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	I0718 22:00:53.903680   11501 fix.go:112] recreateIfNeeded on offline-docker-679000: state= err=unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:53.903697   11501 fix.go:117] machineExists: false. err=machine does not exist
	I0718 22:00:53.932287   11501 out.go:177] * docker "offline-docker-679000" container is missing, will recreate.
	I0718 22:00:53.952198   11501 delete.go:124] DEMOLISHING offline-docker-679000 ...
	I0718 22:00:53.952424   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:00:53.971398   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	W0718 22:00:53.971465   11501 stop.go:83] unable to get state: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:53.971483   11501 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:53.971873   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:00:53.989308   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	I0718 22:00:53.989369   11501 delete.go:82] Unable to get host status for offline-docker-679000, assuming it has already been deleted: state: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:53.989465   11501 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-679000
	W0718 22:00:54.006557   11501 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-679000 returned with exit code 1
	I0718 22:00:54.006594   11501 kic.go:371] could not find the container offline-docker-679000 to remove it. will try anyways
	I0718 22:00:54.006680   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:00:54.024104   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	W0718 22:00:54.024150   11501 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:54.024243   11501 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-679000 /bin/bash -c "sudo init 0"
	W0718 22:00:54.041405   11501 cli_runner.go:211] docker exec --privileged -t offline-docker-679000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0718 22:00:54.041445   11501 oci.go:650] error shutdown offline-docker-679000: docker exec --privileged -t offline-docker-679000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:55.042470   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:00:55.062720   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	I0718 22:00:55.062777   11501 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:55.062791   11501 oci.go:664] temporary error: container offline-docker-679000 status is  but expect it to be exited
	I0718 22:00:55.062821   11501 retry.go:31] will retry after 479.640784ms: couldn't verify container is exited. %v: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:55.544776   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:00:55.565282   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	I0718 22:00:55.565329   11501 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:55.565341   11501 oci.go:664] temporary error: container offline-docker-679000 status is  but expect it to be exited
	I0718 22:00:55.565369   11501 retry.go:31] will retry after 383.81388ms: couldn't verify container is exited. %v: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:55.951532   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:00:55.971422   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	I0718 22:00:55.971468   11501 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:55.971480   11501 oci.go:664] temporary error: container offline-docker-679000 status is  but expect it to be exited
	I0718 22:00:55.971505   11501 retry.go:31] will retry after 1.307707853s: couldn't verify container is exited. %v: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:57.279530   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:00:57.299030   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	I0718 22:00:57.299088   11501 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:57.299102   11501 oci.go:664] temporary error: container offline-docker-679000 status is  but expect it to be exited
	I0718 22:00:57.299127   11501 retry.go:31] will retry after 2.236539415s: couldn't verify container is exited. %v: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:59.536921   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:00:59.556479   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	I0718 22:00:59.556522   11501 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:00:59.556532   11501 oci.go:664] temporary error: container offline-docker-679000 status is  but expect it to be exited
	I0718 22:00:59.556559   11501 retry.go:31] will retry after 1.791788297s: couldn't verify container is exited. %v: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:01:01.350262   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:01:01.370145   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:01.370193   11501 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:01:01.370204   11501 oci.go:664] temporary error: container offline-docker-679000 status is  but expect it to be exited
	I0718 22:01:01.370230   11501 retry.go:31] will retry after 4.801194457s: couldn't verify container is exited. %v: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:01:06.171942   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:01:06.191302   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:06.191356   11501 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:01:06.191366   11501 oci.go:664] temporary error: container offline-docker-679000 status is  but expect it to be exited
	I0718 22:01:06.191388   11501 retry.go:31] will retry after 4.728541246s: couldn't verify container is exited. %v: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:01:10.920689   11501 cli_runner.go:164] Run: docker container inspect offline-docker-679000 --format={{.State.Status}}
	W0718 22:01:10.940912   11501 cli_runner.go:211] docker container inspect offline-docker-679000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:10.940963   11501 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:01:10.940972   11501 oci.go:664] temporary error: container offline-docker-679000 status is  but expect it to be exited
	I0718 22:01:10.941001   11501 oci.go:88] couldn't shut down offline-docker-679000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	 
	I0718 22:01:10.941091   11501 cli_runner.go:164] Run: docker rm -f -v offline-docker-679000
	I0718 22:01:10.959587   11501 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-679000
	W0718 22:01:10.976844   11501 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-679000 returned with exit code 1
	I0718 22:01:10.976964   11501 cli_runner.go:164] Run: docker network inspect offline-docker-679000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 22:01:10.994322   11501 cli_runner.go:164] Run: docker network rm offline-docker-679000
	I0718 22:01:11.077517   11501 fix.go:124] Sleeping 1 second for extra luck!
	I0718 22:01:12.078342   11501 start.go:125] createHost starting for "" (driver="docker")
	I0718 22:01:12.101569   11501 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0718 22:01:12.101755   11501 start.go:159] libmachine.API.Create for "offline-docker-679000" (driver="docker")
	I0718 22:01:12.101784   11501 client.go:168] LocalClient.Create starting
	I0718 22:01:12.102039   11501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 22:01:12.102140   11501 main.go:141] libmachine: Decoding PEM data...
	I0718 22:01:12.102168   11501 main.go:141] libmachine: Parsing certificate...
	I0718 22:01:12.102258   11501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 22:01:12.102342   11501 main.go:141] libmachine: Decoding PEM data...
	I0718 22:01:12.102357   11501 main.go:141] libmachine: Parsing certificate...
	I0718 22:01:12.103263   11501 cli_runner.go:164] Run: docker network inspect offline-docker-679000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 22:01:12.122275   11501 cli_runner.go:211] docker network inspect offline-docker-679000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 22:01:12.122375   11501 network_create.go:284] running [docker network inspect offline-docker-679000] to gather additional debugging logs...
	I0718 22:01:12.122394   11501 cli_runner.go:164] Run: docker network inspect offline-docker-679000
	W0718 22:01:12.139563   11501 cli_runner.go:211] docker network inspect offline-docker-679000 returned with exit code 1
	I0718 22:01:12.139591   11501 network_create.go:287] error running [docker network inspect offline-docker-679000]: docker network inspect offline-docker-679000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-679000 not found
	I0718 22:01:12.139601   11501 network_create.go:289] output of [docker network inspect offline-docker-679000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-679000 not found
	
	** /stderr **
	I0718 22:01:12.139748   11501 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 22:01:12.159230   11501 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:01:12.160793   11501 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:01:12.162371   11501 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:01:12.164111   11501 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:01:12.165678   11501 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:01:12.166198   11501 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001605630}
	I0718 22:01:12.166217   11501 network_create.go:124] attempt to create docker network offline-docker-679000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0718 22:01:12.166306   11501 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-679000 offline-docker-679000
	I0718 22:01:12.230175   11501 network_create.go:108] docker network offline-docker-679000 192.168.94.0/24 created
	I0718 22:01:12.230213   11501 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-679000" container
	I0718 22:01:12.230328   11501 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 22:01:12.249485   11501 cli_runner.go:164] Run: docker volume create offline-docker-679000 --label name.minikube.sigs.k8s.io=offline-docker-679000 --label created_by.minikube.sigs.k8s.io=true
	I0718 22:01:12.289369   11501 oci.go:103] Successfully created a docker volume offline-docker-679000
	I0718 22:01:12.289489   11501 cli_runner.go:164] Run: docker run --rm --name offline-docker-679000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-679000 --entrypoint /usr/bin/test -v offline-docker-679000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 22:01:12.541254   11501 oci.go:107] Successfully prepared a docker volume offline-docker-679000
	I0718 22:01:12.541297   11501 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 22:01:12.541311   11501 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 22:01:12.541423   11501 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-679000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 22:07:12.101500   11501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:07:12.101626   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:12.121111   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:12.121222   11501 retry.go:31] will retry after 349.540406ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:12.473112   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:12.492101   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:12.492220   11501 retry.go:31] will retry after 460.839923ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:12.954297   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:12.974215   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:12.974325   11501 retry.go:31] will retry after 468.014994ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:13.444684   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:13.463347   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:13.463445   11501 retry.go:31] will retry after 601.424028ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:14.066855   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:14.085448   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	W0718 22:07:14.085567   11501 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	
	W0718 22:07:14.085589   11501 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:14.085654   11501 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:07:14.085714   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:14.102971   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:14.103072   11501 retry.go:31] will retry after 154.959614ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:14.258563   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:14.277484   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:14.277589   11501 retry.go:31] will retry after 477.166394ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:14.755195   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:14.773942   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:14.774041   11501 retry.go:31] will retry after 395.967108ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:15.170594   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:15.190382   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	W0718 22:07:15.190494   11501 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	
	W0718 22:07:15.190516   11501 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:15.190527   11501 start.go:128] duration metric: took 6m3.114732837s to createHost
	I0718 22:07:15.190593   11501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:07:15.190653   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:15.207998   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:15.208090   11501 retry.go:31] will retry after 145.214337ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:15.354705   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:15.373547   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:15.373644   11501 retry.go:31] will retry after 373.154944ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:15.747236   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:15.766384   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:15.766478   11501 retry.go:31] will retry after 831.319553ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:16.598173   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:16.618096   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	W0718 22:07:16.618199   11501 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	
	W0718 22:07:16.618216   11501 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:16.618268   11501 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:07:16.618328   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:16.636519   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:16.636621   11501 retry.go:31] will retry after 260.969556ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:16.897831   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:16.916334   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:16.916439   11501 retry.go:31] will retry after 467.152216ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:17.385754   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:17.405647   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:17.405754   11501 retry.go:31] will retry after 334.744115ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:17.740993   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:17.759961   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	I0718 22:07:17.760058   11501 retry.go:31] will retry after 548.295751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:18.308901   11501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000
	W0718 22:07:18.329185   11501 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000 returned with exit code 1
	W0718 22:07:18.329291   11501 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	
	W0718 22:07:18.329311   11501 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-679000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-679000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000
	I0718 22:07:18.329324   11501 fix.go:56] duration metric: took 6m24.447603564s for fixHost
	I0718 22:07:18.329332   11501 start.go:83] releasing machines lock for "offline-docker-679000", held for 6m24.447651789s
	W0718 22:07:18.329404   11501 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-679000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-679000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0718 22:07:18.371014   11501 out.go:177] 
	W0718 22:07:18.393117   11501 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0718 22:07:18.393169   11501 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0718 22:07:18.393200   11501 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0718 22:07:18.414818   11501 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-679000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-07-18 22:07:18.489714 -0700 PDT m=+6140.052356639
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-679000
helpers_test.go:235: (dbg) docker inspect offline-docker-679000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-679000",
	        "Id": "cb3a7a8b38eb11eec5eaad8026b27378bc591496e58d20b050e4fbf2639aa887",
	        "Created": "2024-07-19T05:01:12.182564555Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-679000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-679000 -n offline-docker-679000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-679000 -n offline-docker-679000: exit status 7 (73.344807ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0718 22:07:18.584249   12262 status.go:249] status error: host: state: unknown state "offline-docker-679000": docker container inspect offline-docker-679000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-679000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-679000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-679000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-679000
--- FAIL: TestOffline (754.19s)

TestCertOptions (7201.651s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-451000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0718 22:20:27.525537    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 22:20:44.471874    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 22:24:32.844075    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (5m8s)
	TestCertOptions (4m35s)
	TestNetworkPlugins (30m14s)

goroutine 2586 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0000be1a0, 0xc0006cfbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0008c0060, {0x12f90ae0, 0x2a, 0x2a}, {0xea68825?, 0x105a0f2b?, 0x12fb3aa0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00063e320)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00063e320)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00069fa80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2527 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000205680, 0xc001954300)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 677
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2251 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013fe340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013fe340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013fe340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013fe340, 0xc000744500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 13 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 12
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 1092 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc00181a180, 0xc001564ae0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1091
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 194 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000beac00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 181
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 195 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008d3480, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 181
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2266 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00130c4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00130c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc00130c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc00130c4e0, 0x11bffee0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2583 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x5a8b3f38, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0019ee900?, 0xc0013cca8d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0019ee900, {0xc0013cca8d, 0x573, 0x573})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0019b6108, {0xc0013cca8d?, 0xc000685180?, 0x223?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001d227e0, {0x11c0aad8, 0xc0008be420})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x11c0ac18, 0xc001d227e0}, {0x11c0aad8, 0xc0008be420}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001320e78?, {0x11c0ac18, 0xc001d227e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x12f522f0?, {0x11c0ac18?, 0xc001d227e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x11c0ac18, 0xc001d227e0}, {0x11c0ab98, 0xc0019b6108}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000524e40?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 676
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2255 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013feb60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013feb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013feb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013feb60, 0xc000744780)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 676 [syscall, 4 minutes]:
syscall.syscall6(0xc001d23f80?, 0x1000000000010?, 0x10000000019?, 0x5a3dcfd8?, 0x90?, 0x138d4108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000bfb8a0?, 0xe9a90c5?, 0x90?, 0x11b6c420?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xead99e5?, 0xc000bfb8d4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc00152c4b0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000003380)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000003380)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0006511e0, 0xc000003380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc0006511e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc0006511e0, 0x11bffdd8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2249 [chan receive, 30 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0013fe000, 0xc0013c4060)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2183
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2253 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013fe820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013fe820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013fe820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013fe820, 0xc000744680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2268 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00130dba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00130dba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00130dba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc00130dba0, 0x11bffe80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 197 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008d3450, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x116f5440?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000beaae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008d3480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000496870, {0x11c0c0c0, 0xc000288630}, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000496870, 0x3b9aca00, 0x0, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 195
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 198 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x11c2fdc0, 0xc0000662a0}, 0xc0008aaf50, 0xc0013e2f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x11c2fdc0, 0xc0000662a0}, 0xd?, 0xc0008aaf50, 0xc0008aaf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x11c2fdc0?, 0xc0000662a0?}, 0xc0013feea0?, 0xeadc6a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xed85aa5?, 0xc00059e000?, 0xc000111b80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 195
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 199 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 198
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 1137 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc00138af00, 0xc00188e4e0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1136
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 1864 [syscall, 95 minutes]:
syscall.syscall(0x0?, 0xc000c66468?, 0xc0008a87b0?, 0xeb22c95?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc001d22300?, 0x1?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1792
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1

goroutine 2526 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x5a8b3c50, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001a1c720?, 0xc000890e00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001a1c720, {0xc000890e00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0008be440, {0xc000890e00?, 0xc001527180?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001a3a690, {0x11c0aad8, 0xc0019b60c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x11c0ac18, 0xc001a3a690}, {0x11c0aad8, 0xc0019b60c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00131fe98?, {0x11c0ac18, 0xc001a3a690})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x12f522f0?, {0x11c0ac18?, 0xc001a3a690?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x11c0ac18, 0xc001a3a690}, {0x11c0ab98, 0xc0008be440}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 677
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 1204 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc00181b800, 0xc001565e60)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 829
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 1205 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc001389200)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1191
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2585 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc000003380, 0xc000524fc0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 676
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 939 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 938
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2269 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00130dd40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00130dd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc00130dd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc00130dd40, 0x11bffe98)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2254 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013fe9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013fe9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013fe9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013fe9c0, 0xc000744700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 937 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0008d2e50, 0x2b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x116f5440?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0018b67e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008d2ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006c87c0, {0x11c0c0c0, 0xc000c494d0}, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006c87c0, 0x3b9aca00, 0x0, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 945
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2257 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00130d040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00130d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc00130d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc00130d040, 0x11bfff00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2267 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00130d380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00130d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc00130d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc00130d380, 0x11bfff08)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2185 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00130c9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00130c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc00130c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc00130c9c0, 0x11bffed0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2274 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013ff1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013ff1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013ff1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013ff1e0, 0xc000744900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2184 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00130c820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00130c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc00130c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc00130c820, 0x11bffec0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 677 [syscall, 5 minutes]:
syscall.syscall6(0xc001a3bf80?, 0x1000000000010?, 0x10000000019?, 0x138dfe08?, 0x90?, 0x138d4108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc001303a40?, 0xe9a90c5?, 0x90?, 0x11b6c420?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xead99e5?, 0xc001303a74, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0014f03c0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000205680)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000205680)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc000651380, 0xc000205680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc000651380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc000651380, 0x11bffdd0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2584 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x5a8b4508, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0019ee9c0?, 0xc000891200?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0019ee9c0, {0xc000891200, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0019b6120, {0xc000891200?, 0xc000685180?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001d22810, {0x11c0aad8, 0xc0008be430})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x11c0ac18, 0xc001d22810}, {0x11c0aad8, 0xc0008be430}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001323678?, {0x11c0ac18, 0xc001d22810})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x12f522f0?, {0x11c0ac18?, 0xc001d22810?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x11c0ac18, 0xc001d22810}, {0x11c0ab98, 0xc0019b6120}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000067080?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 676
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2183 [chan receive, 30 minutes]:
testing.(*T).Run(0xc00130c000, {0x1054748a?, 0x5559c3d061e?}, 0xc0013c4060)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00130c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00130c000, 0x11bffeb8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2250 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013fe1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013fe1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013fe1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013fe1a0, 0xc000744400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

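Goroutine 2250, and the matching dumps below it, are TestNetworkPlugins subtests parked in testing.(*T).Parallel for 30 minutes, waiting for a -test.parallel slot to free up. A hedged sketch of the MaybeParallel helper's shape (the real one lives at helpers_test.go:483; this lowercase `maybeParallel` is illustrative):

package integration

import "testing"

// maybeParallel opts a subtest into parallel execution. t.Parallel parks
// the goroutine in testing's waitParallel until a parallel slot frees up,
// which is exactly the long wait visible in these dumps.
func maybeParallel(t *testing.T) {
	t.Helper()
	t.Parallel()
}
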
goroutine 2273 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013feea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013feea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013feea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013feea0, 0xc000744880)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2252 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013fe4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013fe4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013fe4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013fe4e0, 0xc000744600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 756 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0x5a8b4220, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0008b3300?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0008b3300)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0008b3300)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0006fd3e0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0006fd3e0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc000ae25a0, {0x11c22cb0, 0xc0006fd3e0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc000ae25a0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc00130c680?, 0xc00130c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 753
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

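Goroutine 756 is the test-scoped HTTP proxy from functional_test.go: started once on a background goroutine, its listener then blocks in Accept for the life of the binary (113 minutes here), which is expected rather than a leak. A simplified, hypothetical sketch of that shape (names are illustrative):

package integration

import (
	"net"
	"net/http"
	"testing"
)

// startProxy serves on a background goroutine; that goroutine then sits
// in Accept until the process exits, just like goroutine 756 above.
func startProxy(t *testing.T) *http.Server {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		t.Fatalf("listen: %v", err)
	}
	srv := &http.Server{Handler: http.NewServeMux()}
	go srv.Serve(ln)
	return srv
}
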
goroutine 1206 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc001389200)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1191
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 2525 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x5a8b4600, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001a1c660?, 0xc0013cda96?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001a1c660, {0xc0013cda96, 0x56a, 0x56a})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0008be428, {0xc0013cda96?, 0xc001526fc0?, 0x22c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001a3a660, {0x11c0aad8, 0xc0019b60b8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x11c0ac18, 0xc001a3a660}, {0x11c0aad8, 0xc0019b60b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0008ab678?, {0x11c0ac18, 0xc001a3a660})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x12f522f0?, {0x11c0ac18?, 0xc001a3a660?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x11c0ac18, 0xc001a3a660}, {0x11c0ab98, 0xc0008be428}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0019541e0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 677
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 938 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x11c2fdc0, 0xc0000662a0}, 0xc000093f50, 0xc0013e3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x11c2fdc0, 0xc0000662a0}, 0xa0?, 0xc000093f50, 0xc000093f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x11c2fdc0?, 0xc0000662a0?}, 0xc001385860?, 0xeadc6a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xeb22945?, 0xc001462900?, 0xc001f002a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 945
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

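Goroutine 938 is client-go's certificate-rotation poll loop, built on apimachinery's wait helpers. The same PollImmediateUntil pattern in a hedged, self-contained example (the function is deprecated in newer apimachinery releases, but it matches the v0.30.3 frames above; the sleep/stop wiring here is illustrative):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stop := make(chan struct{})
	go func() {
		time.Sleep(3 * time.Second)
		close(stop) // in client-go this is the cert store's stop channel
	}()
	// Poll once immediately, then every second, until stop is closed.
	_ = wait.PollImmediateUntil(time.Second, func() (bool, error) {
		fmt.Println("checking for a rotated client certificate...")
		return false, nil // never "done"; the loop runs until stopped
	}, stop)
}
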
goroutine 925 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001462d80, 0xc001f00540)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 924
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 928 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0018b6900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 927
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

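Goroutine 928 is the timer loop that client-go's delaying workqueue starts on construction. A hedged usage example of that queue type (the waitingLoop itself is internal to the package; the item name here is made up):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// NewDelayingQueue spawns the waitingLoop goroutine seen above.
	q := workqueue.NewDelayingQueue()
	defer q.ShutDown()
	q.AddAfter("refresh-certs", 500*time.Millisecond)
	item, shutdown := q.Get() // blocks until the delay elapses
	if !shutdown {
		fmt.Println("got:", item)
		q.Done(item)
	}
}
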
goroutine 945 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008d2ec0, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 927
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2256 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00059a0f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013fed00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013fed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013fed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013fed00, 0xc000744800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2249
	/usr/local/go/src/testing/testing.go:1742 +0x390

TestDockerFlags (751.2s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-063000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0718 22:09:32.765781    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 22:10:44.389943    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 22:14:15.821243    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 22:14:32.763420    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 22:15:44.387104    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 22:19:32.760992    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-063000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m30.332374987s)

-- stdout --
	* [docker-flags-063000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-063000" primary control-plane node in "docker-flags-063000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-063000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0718 22:07:51.917670   12357 out.go:291] Setting OutFile to fd 1 ...
	I0718 22:07:51.918408   12357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 22:07:51.918416   12357 out.go:304] Setting ErrFile to fd 2...
	I0718 22:07:51.918422   12357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 22:07:51.919007   12357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 22:07:51.920558   12357 out.go:298] Setting JSON to false
	I0718 22:07:51.943122   12357 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7644,"bootTime":1721358027,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 22:07:51.943230   12357 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 22:07:51.964797   12357 out.go:177] * [docker-flags-063000] minikube v1.33.1 on Darwin 14.5
	I0718 22:07:52.007702   12357 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 22:07:52.007749   12357 notify.go:220] Checking for updates...
	I0718 22:07:52.050508   12357 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 22:07:52.071698   12357 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 22:07:52.092595   12357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 22:07:52.113723   12357 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	I0718 22:07:52.134692   12357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 22:07:52.156417   12357 config.go:182] Loaded profile config "force-systemd-flag-274000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 22:07:52.156588   12357 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 22:07:52.180512   12357 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 22:07:52.180695   12357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 22:07:52.258194   12357 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:114 OomKillDisable:false NGoroutines:230 SystemTime:2024-07-19 05:07:52.249288047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 22:07:52.301076   12357 out.go:177] * Using the docker driver based on user configuration
	I0718 22:07:52.324119   12357 start.go:297] selected driver: docker
	I0718 22:07:52.324145   12357 start.go:901] validating driver "docker" against <nil>
	I0718 22:07:52.324164   12357 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 22:07:52.328815   12357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 22:07:52.407858   12357 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:114 OomKillDisable:false NGoroutines:230 SystemTime:2024-07-19 05:07:52.399087083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 22:07:52.408046   12357 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 22:07:52.408236   12357 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0718 22:07:52.430001   12357 out.go:177] * Using Docker Desktop driver with root privileges
	I0718 22:07:52.451709   12357 cni.go:84] Creating CNI manager for ""
	I0718 22:07:52.451756   12357 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 22:07:52.451769   12357 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 22:07:52.451903   12357 start.go:340] cluster config:
	{Name:docker-flags-063000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 22:07:52.473463   12357 out.go:177] * Starting "docker-flags-063000" primary control-plane node in "docker-flags-063000" cluster
	I0718 22:07:52.515715   12357 cache.go:121] Beginning downloading kic base image for docker with docker
	I0718 22:07:52.537621   12357 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0718 22:07:52.579713   12357 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 22:07:52.579763   12357 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0718 22:07:52.579797   12357 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 22:07:52.579827   12357 cache.go:56] Caching tarball of preloaded images
	I0718 22:07:52.580082   12357 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 22:07:52.580102   12357 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 22:07:52.581059   12357 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/docker-flags-063000/config.json ...
	I0718 22:07:52.581213   12357 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/docker-flags-063000/config.json: {Name:mk97a1d10646431a9fd6cf0fffb429a121e6ab3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0718 22:07:52.606299   12357 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0718 22:07:52.606325   12357 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0718 22:07:52.606497   12357 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0718 22:07:52.606517   12357 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0718 22:07:52.606525   12357 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0718 22:07:52.606535   12357 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0718 22:07:52.606540   12357 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0718 22:07:52.609554   12357 image.go:273] response: 
	I0718 22:07:52.743702   12357 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0718 22:07:52.743758   12357 cache.go:194] Successfully downloaded all kic artifacts
	I0718 22:07:52.743808   12357 start.go:360] acquireMachinesLock for docker-flags-063000: {Name:mkb33a7edfb02e2184524500f407111766a0ae27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 22:07:52.743978   12357 start.go:364] duration metric: took 157.79µs to acquireMachinesLock for "docker-flags-063000"
	I0718 22:07:52.744007   12357 start.go:93] Provisioning new machine with config: &{Name:docker-flags-063000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 22:07:52.744085   12357 start.go:125] createHost starting for "" (driver="docker")
	I0718 22:07:52.786108   12357 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0718 22:07:52.786300   12357 start.go:159] libmachine.API.Create for "docker-flags-063000" (driver="docker")
	I0718 22:07:52.786330   12357 client.go:168] LocalClient.Create starting
	I0718 22:07:52.786453   12357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 22:07:52.786506   12357 main.go:141] libmachine: Decoding PEM data...
	I0718 22:07:52.786524   12357 main.go:141] libmachine: Parsing certificate...
	I0718 22:07:52.786573   12357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 22:07:52.786612   12357 main.go:141] libmachine: Decoding PEM data...
	I0718 22:07:52.786621   12357 main.go:141] libmachine: Parsing certificate...
	I0718 22:07:52.787122   12357 cli_runner.go:164] Run: docker network inspect docker-flags-063000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 22:07:52.804739   12357 cli_runner.go:211] docker network inspect docker-flags-063000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 22:07:52.804832   12357 network_create.go:284] running [docker network inspect docker-flags-063000] to gather additional debugging logs...
	I0718 22:07:52.804856   12357 cli_runner.go:164] Run: docker network inspect docker-flags-063000
	W0718 22:07:52.822331   12357 cli_runner.go:211] docker network inspect docker-flags-063000 returned with exit code 1
	I0718 22:07:52.822363   12357 network_create.go:287] error running [docker network inspect docker-flags-063000]: docker network inspect docker-flags-063000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-063000 not found
	I0718 22:07:52.822377   12357 network_create.go:289] output of [docker network inspect docker-flags-063000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-063000 not found
	
	** /stderr **
	I0718 22:07:52.822499   12357 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 22:07:52.841327   12357 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:07:52.842737   12357 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:07:52.844326   12357 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:07:52.844677   12357 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015177d0}
	I0718 22:07:52.844692   12357 network_create.go:124] attempt to create docker network docker-flags-063000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0718 22:07:52.844768   12357 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-063000 docker-flags-063000
	W0718 22:07:52.862675   12357 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-063000 docker-flags-063000 returned with exit code 1
	W0718 22:07:52.862719   12357 network_create.go:149] failed to create docker network docker-flags-063000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-063000 docker-flags-063000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0718 22:07:52.862741   12357 network_create.go:116] failed to create docker network docker-flags-063000 192.168.76.0/24, will retry: subnet is taken
	I0718 22:07:52.864339   12357 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:07:52.864708   12357 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001560750}
	I0718 22:07:52.864726   12357 network_create.go:124] attempt to create docker network docker-flags-063000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0718 22:07:52.864794   12357 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-063000 docker-flags-063000
	I0718 22:07:52.928384   12357 network_create.go:108] docker network docker-flags-063000 192.168.85.0/24 created
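
The attempts above show minikube's subnet probing: reserved /24s are skipped, the create on 192.168.76.0/24 fails with "Pool overlaps with other one on this address space", and the retry on 192.168.85.0/24 succeeds. The same probe-and-retry in a hypothetical standalone sketch (the network name `demo-net` and the octet list are illustrative, not minikube's network_create.go logic):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Candidate third octets mirror the log: .76 overlapped, .85 worked.
	for _, octet := range []int{76, 85} {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "demo-net").CombinedOutput()
		if err != nil {
			fmt.Printf("subnet %s taken, will retry: %s", subnet, out)
			continue
		}
		fmt.Println("created network demo-net on", subnet)
		return
	}
	fmt.Println("no free subnet found")
}
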
	I0718 22:07:52.928434   12357 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-063000" container
	I0718 22:07:52.928565   12357 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 22:07:52.948080   12357 cli_runner.go:164] Run: docker volume create docker-flags-063000 --label name.minikube.sigs.k8s.io=docker-flags-063000 --label created_by.minikube.sigs.k8s.io=true
	I0718 22:07:52.966215   12357 oci.go:103] Successfully created a docker volume docker-flags-063000
	I0718 22:07:52.966354   12357 cli_runner.go:164] Run: docker run --rm --name docker-flags-063000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-063000 --entrypoint /usr/bin/test -v docker-flags-063000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 22:07:53.391397   12357 oci.go:107] Successfully prepared a docker volume docker-flags-063000
	I0718 22:07:53.391454   12357 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 22:07:53.391481   12357 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 22:07:53.391583   12357 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-063000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 22:13:52.784020   12357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:13:52.784214   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:13:52.805108   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:13:52.805234   12357 retry.go:31] will retry after 189.568236ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:13:52.997167   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:13:53.017333   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:13:53.017451   12357 retry.go:31] will retry after 506.906451ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:13:53.524672   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:13:53.543924   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:13:53.544034   12357 retry.go:31] will retry after 835.38262ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:13:54.380728   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:13:54.400667   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	W0718 22:13:54.400813   12357 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	
	W0718 22:13:54.400847   12357 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:13:54.400929   12357 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:13:54.400988   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:13:54.420357   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:13:54.420465   12357 retry.go:31] will retry after 321.34832ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:13:54.744197   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:13:54.764948   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:13:54.765039   12357 retry.go:31] will retry after 197.887892ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:13:54.963841   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:13:54.982777   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:13:54.982871   12357 retry.go:31] will retry after 571.069271ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:13:55.554186   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:13:55.573535   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	W0718 22:13:55.573645   12357 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	
	W0718 22:13:55.573669   12357 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:13:55.573707   12357 start.go:128] duration metric: took 6m2.832268512s to createHost
	I0718 22:13:55.573714   12357 start.go:83] releasing machines lock for "docker-flags-063000", held for 6m2.832387404s
	W0718 22:13:55.573744   12357 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0718 22:13:55.574228   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:13:55.592289   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:55.592357   12357 delete.go:82] Unable to get host status for docker-flags-063000, assuming it has already been deleted: state: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	W0718 22:13:55.592468   12357 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0718 22:13:55.592480   12357 start.go:729] Will try again in 5 seconds ...
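
The retry.go:31 lines in this block come from a generic backoff helper: each failed `docker container inspect` is retried after a growing delay. A hedged miniature of that pattern (names and parameters here are illustrative, not minikube's retry package, which also adds jitter):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs op up to attempts times, doubling the delay
// after each failure.
func retryWithBackoff(attempts int, delay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryWithBackoff(3, 200*time.Millisecond, func() error {
		return errors.New("No such container: docker-flags-063000")
	})
}
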
	I0718 22:14:00.594567   12357 start.go:360] acquireMachinesLock for docker-flags-063000: {Name:mkb33a7edfb02e2184524500f407111766a0ae27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 22:14:00.595424   12357 start.go:364] duration metric: took 755.46µs to acquireMachinesLock for "docker-flags-063000"
	I0718 22:14:00.595594   12357 start.go:96] Skipping create...Using existing machine configuration
	I0718 22:14:00.595617   12357 fix.go:54] fixHost starting: 
	I0718 22:14:00.596187   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:00.616784   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	I0718 22:14:00.616832   12357 fix.go:112] recreateIfNeeded on docker-flags-063000: state= err=unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:00.616849   12357 fix.go:117] machineExists: false. err=machine does not exist
	I0718 22:14:00.638803   12357 out.go:177] * docker "docker-flags-063000" container is missing, will recreate.
	I0718 22:14:00.660595   12357 delete.go:124] DEMOLISHING docker-flags-063000 ...
	I0718 22:14:00.660769   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:00.680445   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	W0718 22:14:00.680505   12357 stop.go:83] unable to get state: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:00.680528   12357 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:00.680926   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:00.698711   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	I0718 22:14:00.698768   12357 delete.go:82] Unable to get host status for docker-flags-063000, assuming it has already been deleted: state: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:00.698849   12357 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-063000
	W0718 22:14:00.716244   12357 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-063000 returned with exit code 1
	I0718 22:14:00.716283   12357 kic.go:371] could not find the container docker-flags-063000 to remove it. will try anyways
	I0718 22:14:00.716358   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:00.733752   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	W0718 22:14:00.733800   12357 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:00.733884   12357 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-063000 /bin/bash -c "sudo init 0"
	W0718 22:14:00.751894   12357 cli_runner.go:211] docker exec --privileged -t docker-flags-063000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0718 22:14:00.751959   12357 oci.go:650] error shutdown docker-flags-063000: docker exec --privileged -t docker-flags-063000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:01.753938   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:01.773584   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	I0718 22:14:01.773630   12357 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:01.773645   12357 oci.go:664] temporary error: container docker-flags-063000 status is  but expect it to be exited
	I0718 22:14:01.773668   12357 retry.go:31] will retry after 396.194351ms: couldn't verify container is exited. %v: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:02.171907   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:02.191435   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	I0718 22:14:02.191483   12357 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:02.191495   12357 oci.go:664] temporary error: container docker-flags-063000 status is  but expect it to be exited
	I0718 22:14:02.191520   12357 retry.go:31] will retry after 763.413477ms: couldn't verify container is exited. %v: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:02.957319   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:02.977451   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	I0718 22:14:02.977501   12357 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:02.977516   12357 oci.go:664] temporary error: container docker-flags-063000 status is  but expect it to be exited
	I0718 22:14:02.977540   12357 retry.go:31] will retry after 782.548618ms: couldn't verify container is exited. %v: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:03.761062   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:03.781998   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	I0718 22:14:03.782046   12357 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:03.782057   12357 oci.go:664] temporary error: container docker-flags-063000 status is  but expect it to be exited
	I0718 22:14:03.782103   12357 retry.go:31] will retry after 1.593217921s: couldn't verify container is exited. %v: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:05.377454   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:05.397562   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	I0718 22:14:05.397613   12357 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:05.397628   12357 oci.go:664] temporary error: container docker-flags-063000 status is  but expect it to be exited
	I0718 22:14:05.397649   12357 retry.go:31] will retry after 1.975233197s: couldn't verify container is exited. %v: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:07.373076   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:07.391798   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	I0718 22:14:07.391849   12357 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:07.391861   12357 oci.go:664] temporary error: container docker-flags-063000 status is  but expect it to be exited
	I0718 22:14:07.391886   12357 retry.go:31] will retry after 4.180487704s: couldn't verify container is exited. %v: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:11.572670   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:11.592221   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	I0718 22:14:11.592271   12357 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:11.592282   12357 oci.go:664] temporary error: container docker-flags-063000 status is  but expect it to be exited
	I0718 22:14:11.592308   12357 retry.go:31] will retry after 4.250864307s: couldn't verify container is exited. %v: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:15.844706   12357 cli_runner.go:164] Run: docker container inspect docker-flags-063000 --format={{.State.Status}}
	W0718 22:14:15.863724   12357 cli_runner.go:211] docker container inspect docker-flags-063000 --format={{.State.Status}} returned with exit code 1
	I0718 22:14:15.863786   12357 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:14:15.863798   12357 oci.go:664] temporary error: container docker-flags-063000 status is  but expect it to be exited
	I0718 22:14:15.863829   12357 oci.go:88] couldn't shut down docker-flags-063000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	 
	I0718 22:14:15.863897   12357 cli_runner.go:164] Run: docker rm -f -v docker-flags-063000
	I0718 22:14:15.882846   12357 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-063000
	W0718 22:14:15.901087   12357 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-063000 returned with exit code 1
	I0718 22:14:15.901200   12357 cli_runner.go:164] Run: docker network inspect docker-flags-063000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 22:14:15.919417   12357 cli_runner.go:164] Run: docker network rm docker-flags-063000
	I0718 22:14:16.001364   12357 fix.go:124] Sleeping 1 second for extra luck!
	I0718 22:14:17.001508   12357 start.go:125] createHost starting for "" (driver="docker")
	I0718 22:14:17.023302   12357 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0718 22:14:17.023460   12357 start.go:159] libmachine.API.Create for "docker-flags-063000" (driver="docker")
	I0718 22:14:17.023483   12357 client.go:168] LocalClient.Create starting
	I0718 22:14:17.023681   12357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 22:14:17.023772   12357 main.go:141] libmachine: Decoding PEM data...
	I0718 22:14:17.023797   12357 main.go:141] libmachine: Parsing certificate...
	I0718 22:14:17.023881   12357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 22:14:17.023955   12357 main.go:141] libmachine: Decoding PEM data...
	I0718 22:14:17.023970   12357 main.go:141] libmachine: Parsing certificate...
	I0718 22:14:17.044892   12357 cli_runner.go:164] Run: docker network inspect docker-flags-063000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 22:14:17.063866   12357 cli_runner.go:211] docker network inspect docker-flags-063000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 22:14:17.063982   12357 network_create.go:284] running [docker network inspect docker-flags-063000] to gather additional debugging logs...
	I0718 22:14:17.063999   12357 cli_runner.go:164] Run: docker network inspect docker-flags-063000
	W0718 22:14:17.081832   12357 cli_runner.go:211] docker network inspect docker-flags-063000 returned with exit code 1
	I0718 22:14:17.081861   12357 network_create.go:287] error running [docker network inspect docker-flags-063000]: docker network inspect docker-flags-063000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-063000 not found
	I0718 22:14:17.081877   12357 network_create.go:289] output of [docker network inspect docker-flags-063000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-063000 not found
	
	** /stderr **
	I0718 22:14:17.082027   12357 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 22:14:17.102772   12357 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:14:17.104351   12357 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:14:17.105689   12357 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:14:17.107106   12357 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:14:17.108645   12357 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:14:17.110243   12357 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:14:17.110623   12357 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00054b480}
	I0718 22:14:17.110635   12357 network_create.go:124] attempt to create docker network docker-flags-063000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0718 22:14:17.110722   12357 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-063000 docker-flags-063000
	I0718 22:14:17.180657   12357 network_create.go:108] docker network docker-flags-063000 192.168.103.0/24 created
	I0718 22:14:17.180697   12357 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-063000" container
	I0718 22:14:17.180817   12357 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 22:14:17.202223   12357 cli_runner.go:164] Run: docker volume create docker-flags-063000 --label name.minikube.sigs.k8s.io=docker-flags-063000 --label created_by.minikube.sigs.k8s.io=true
	I0718 22:14:17.221086   12357 oci.go:103] Successfully created a docker volume docker-flags-063000
	I0718 22:14:17.221230   12357 cli_runner.go:164] Run: docker run --rm --name docker-flags-063000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-063000 --entrypoint /usr/bin/test -v docker-flags-063000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 22:14:17.491568   12357 oci.go:107] Successfully prepared a docker volume docker-flags-063000
	I0718 22:14:17.491611   12357 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 22:14:17.491627   12357 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 22:14:17.491761   12357 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-063000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 22:20:17.108070   12357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:20:17.108199   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:17.128050   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:17.128164   12357 retry.go:31] will retry after 325.821141ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:17.456396   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:17.475568   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:17.475662   12357 retry.go:31] will retry after 264.020554ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:17.741555   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:17.760508   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:17.760605   12357 retry.go:31] will retry after 603.078523ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:18.364064   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:18.383498   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	W0718 22:20:18.383602   12357 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	
	W0718 22:20:18.383622   12357 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:18.383688   12357 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:20:18.383750   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:18.401362   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:18.401462   12357 retry.go:31] will retry after 289.836519ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:18.693702   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:18.713468   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:18.713579   12357 retry.go:31] will retry after 286.348213ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:19.002394   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:19.022150   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:19.022270   12357 retry.go:31] will retry after 771.265825ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:19.794847   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:19.815017   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	W0718 22:20:19.815122   12357 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	
	W0718 22:20:19.815145   12357 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:19.815157   12357 start.go:128] duration metric: took 6m2.731406582s to createHost
	I0718 22:20:19.815235   12357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:20:19.815283   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:19.833114   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:19.833210   12357 retry.go:31] will retry after 130.963244ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:19.964739   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:19.984484   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:19.984588   12357 retry.go:31] will retry after 426.424733ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:20.411681   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:20.430526   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:20.430617   12357 retry.go:31] will retry after 411.905379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:20.843073   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:20.863120   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	W0718 22:20:20.863225   12357 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	
	W0718 22:20:20.863246   12357 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:20.863315   12357 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:20:20.863371   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:20.880390   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:20.880480   12357 retry.go:31] will retry after 190.339431ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:21.071133   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:21.090859   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:21.090952   12357 retry.go:31] will retry after 364.438061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:21.457739   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:21.478047   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	I0718 22:20:21.478146   12357 retry.go:31] will retry after 605.594504ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:22.084329   12357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000
	W0718 22:20:22.103892   12357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000 returned with exit code 1
	W0718 22:20:22.103989   12357 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	
	W0718 22:20:22.104017   12357 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-063000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-063000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	I0718 22:20:22.104033   12357 fix.go:56] duration metric: took 6m21.426336949s for fixHost
	I0718 22:20:22.104039   12357 start.go:83] releasing machines lock for "docker-flags-063000", held for 6m21.426489983s
	W0718 22:20:22.104107   12357 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-063000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-063000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0718 22:20:22.147604   12357 out.go:177] 
	W0718 22:20:22.169675   12357 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0718 22:20:22.169739   12357 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0718 22:20:22.169762   12357 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0718 22:20:22.211624   12357 out.go:177] 

** /stderr **
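The "will retry after ..." lines above come from a generic backoff helper (retry.go:31) that re-polls `docker container inspect` with growing, jittered delays until the state check succeeds or the caller gives up. A minimal sketch of that pattern in Go; the names here are hypothetical and this is not minikube's actual retry package:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or maxElapsed has
	// passed, sleeping a jittered, roughly doubling delay between
	// attempts -- the shape of the "will retry after ..." log lines.
	func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
		start := time.Now()
		delay := 400 * time.Millisecond // arbitrary initial delay
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("gave up after %s: %w", time.Since(start), err)
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 3 {
				return fmt.Errorf("no such container (attempt %d)", attempts)
			}
			return nil
		}, 10*time.Second)
		fmt.Println("result:", err)
	}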
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-063000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
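Earlier in the same log, the network.go:209/206 lines show how the replacement network landed on 192.168.103.0/24: minikube walks candidate private /24 subnets with the third octet stepping by 9 (49, 58, 67, 76, 85, 94, ...) and takes the first one that no existing docker network reserves. A rough sketch of that walk, illustrative only and not minikube's implementation:

	package main

	import "fmt"

	// firstFreeSubnet steps the third octet by 9 starting at
	// 192.168.49.0/24 and returns the first candidate not already
	// reserved; reserved stands in for the subnets of existing
	// docker networks.
	func firstFreeSubnet(reserved map[string]bool) string {
		for octet := 49; octet <= 254; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !reserved[subnet] {
				return subnet
			}
		}
		return ""
	}

	func main() {
		reserved := map[string]bool{ // the six subnets skipped in the log
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		fmt.Println(firstFreeSubnet(reserved)) // 192.168.103.0/24
	}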
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-063000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-063000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (161.287865ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-063000 host status: state: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-063000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
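The two docker_test.go:63 failures are plain substring assertions: on a healthy node, `systemctl show docker --property=Environment` prints an Environment= line that should contain both injected pairs, but this run only captured "\n\n". A minimal sketch of the check, with a hypothetical helper rather than the test's actual code:

	package main

	import (
		"fmt"
		"strings"
	)

	// containsAll reports whether every expected KEY=VALUE pair
	// appears in the captured systemctl output.
	func containsAll(output string, pairs ...string) bool {
		for _, kv := range pairs {
			if !strings.Contains(output, kv) {
				return false
			}
		}
		return true
	}

	func main() {
		fmt.Println(containsAll("Environment=FOO=BAR BAZ=BAT", "FOO=BAR", "BAZ=BAT")) // true
		fmt.Println(containsAll("\n\n", "FOO=BAR", "BAZ=BAT"))                        // false, as in this run
	}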
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-063000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-063000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (162.313122ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-063000 host status: state: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000
	

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-063000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-063000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
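The *--debug* expectation traces back to the `--docker-opt=debug` flag on the start command: each --docker-opt is intended to surface as a `--<opt>` argument on the dockerd command line, which `systemctl show docker --property=ExecStart` would then reveal. A sketch of that flag mapping, assuming simple pass-through of the options (hypothetical helper):

	package main

	import (
		"fmt"
		"strings"
	)

	// dockerdArgs turns minikube-style --docker-opt values ("debug",
	// "icc=true") into the dockerd flags the ExecStart check looks for.
	func dockerdArgs(opts []string) string {
		args := []string{"dockerd"}
		for _, o := range opts {
			args = append(args, "--"+o)
		}
		return strings.Join(args, " ")
	}

	func main() {
		fmt.Println(dockerdArgs([]string{"debug", "icc=true"}))
		// Output: dockerd --debug --icc=true
	}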
panic.go:626: *** TestDockerFlags FAILED at 2024-07-18 22:20:22.611357 -0700 PDT m=+6924.094864027
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-063000
helpers_test.go:235: (dbg) docker inspect docker-flags-063000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-063000",
	        "Id": "16e05895fbb38398506dc3680c327b992de8ba322416a112949b8cb5c17eeed5",
	        "Created": "2024-07-19T05:14:17.12736527Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-063000"
	        }
	    }
	]

-- /stdout --
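The inspect output confirms what the start log implied: the bridge network docker-flags-063000 was created (05:14:17, matching the network_create line) and is still present, while the container it was meant for never existed. A small cleanup sketch keyed on the label visible above, assuming a local docker CLI on PATH; in practice `minikube delete -p docker-flags-063000` (run below) does this and more:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// removeLeftoverNetworks deletes docker networks carrying the
	// minikube profile label shown in the inspect output above.
	func removeLeftoverNetworks(profile string) error {
		filter := "label=name.minikube.sigs.k8s.io=" + profile
		out, err := exec.Command("docker", "network", "ls", "-q", "--filter", filter).Output()
		if err != nil {
			return err
		}
		for _, id := range strings.Fields(string(out)) {
			if err := exec.Command("docker", "network", "rm", id).Run(); err != nil {
				return fmt.Errorf("rm network %s: %w", id, err)
			}
		}
		return nil
	}

	func main() {
		if err := removeLeftoverNetworks("docker-flags-063000"); err != nil {
			fmt.Println("cleanup:", err)
		}
	}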
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-063000 -n docker-flags-063000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-063000 -n docker-flags-063000: exit status 7 (73.450922ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0718 22:20:22.704471   12712 status.go:249] status error: host: state: unknown state "docker-flags-063000": docker container inspect docker-flags-063000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-063000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-063000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-063000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-063000
--- FAIL: TestDockerFlags (751.20s)

TestForceSystemdFlag (751.94s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-274000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-274000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m31.146349832s)
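For context: `--force-systemd` asks minikube to run the container runtime with the systemd cgroup driver rather than cgroupfs, and the remainder of this test would normally read the driver back from the daemon to confirm the flag took effect. One way to express that check, sketched with the standard docker CLI (the test's actual assertion may differ):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// cgroupDriver reports which cgroup driver the docker daemon is
	// using; with --force-systemd honored this should be "systemd".
	// Assumes a local docker CLI; on a minikube node the equivalent
	// command would run via `minikube ssh`.
	func cgroupDriver() (string, error) {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		d, err := cgroupDriver()
		fmt.Println(d, err)
	}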

-- stdout --
	* [force-systemd-flag-274000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-274000" primary control-plane node in "force-systemd-flag-274000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-274000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0718 22:07:19.083275   12276 out.go:291] Setting OutFile to fd 1 ...
	I0718 22:07:19.083537   12276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 22:07:19.083543   12276 out.go:304] Setting ErrFile to fd 2...
	I0718 22:07:19.083546   12276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 22:07:19.083707   12276 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 22:07:19.085261   12276 out.go:298] Setting JSON to false
	I0718 22:07:19.108110   12276 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7612,"bootTime":1721358027,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 22:07:19.108202   12276 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 22:07:19.130659   12276 out.go:177] * [force-systemd-flag-274000] minikube v1.33.1 on Darwin 14.5
	I0718 22:07:19.172263   12276 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 22:07:19.172346   12276 notify.go:220] Checking for updates...
	I0718 22:07:19.214221   12276 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 22:07:19.256405   12276 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 22:07:19.299338   12276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 22:07:19.342035   12276 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	I0718 22:07:19.386548   12276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 22:07:19.408602   12276 config.go:182] Loaded profile config "force-systemd-env-097000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 22:07:19.408707   12276 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 22:07:19.433125   12276 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 22:07:19.433296   12276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 22:07:19.513778   12276 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:110 OomKillDisable:false NGoroutines:218 SystemTime:2024-07-19 05:07:19.504711788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 22:07:19.557283   12276 out.go:177] * Using the docker driver based on user configuration
	I0718 22:07:19.578199   12276 start.go:297] selected driver: docker
	I0718 22:07:19.578217   12276 start.go:901] validating driver "docker" against <nil>
	I0718 22:07:19.578234   12276 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 22:07:19.582554   12276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 22:07:19.663249   12276 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:110 OomKillDisable:false NGoroutines:218 SystemTime:2024-07-19 05:07:19.654158773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 22:07:19.663434   12276 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 22:07:19.663622   12276 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 22:07:19.685512   12276 out.go:177] * Using Docker Desktop driver with root privileges
	I0718 22:07:19.707211   12276 cni.go:84] Creating CNI manager for ""
	I0718 22:07:19.707259   12276 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 22:07:19.707279   12276 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 22:07:19.707412   12276 start.go:340] cluster config:
	{Name:force-systemd-flag-274000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-274000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 22:07:19.729020   12276 out.go:177] * Starting "force-systemd-flag-274000" primary control-plane node in "force-systemd-flag-274000" cluster
	I0718 22:07:19.771102   12276 cache.go:121] Beginning downloading kic base image for docker with docker
	I0718 22:07:19.793049   12276 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0718 22:07:19.815094   12276 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 22:07:19.815178   12276 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 22:07:19.815205   12276 cache.go:56] Caching tarball of preloaded images
	I0718 22:07:19.815190   12276 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0718 22:07:19.815459   12276 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 22:07:19.815479   12276 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 22:07:19.816357   12276 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/force-systemd-flag-274000/config.json ...
	I0718 22:07:19.816565   12276 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/force-systemd-flag-274000/config.json: {Name:mk3ef350ca5e16db2a2b789c45db1945a3214ae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
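The saved config.json is plain JSON whose top-level keys mirror the struct dump a few lines up. A hedged sketch of reading a few of those fields back; the ClusterConfig type here is a hand-picked subset for illustration, not minikube's real config type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // ClusterConfig is a small subset of the fields dumped above; unknown
    // JSON keys are simply ignored by json.Unmarshal.
    type ClusterConfig struct {
        Name         string
        KicBaseImage string
        Memory       int
        CPUs         int
        Driver       string
    }

    func main() {
        data, err := os.ReadFile("/Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/force-systemd-flag-274000/config.json")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var cc ClusterConfig
        if err := json.Unmarshal(data, &cc); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("%s: driver=%s cpus=%d mem=%dMB\n", cc.Name, cc.Driver, cc.CPUs, cc.Memory)
    }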
	W0718 22:07:19.842893   12276 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0718 22:07:19.842909   12276 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0718 22:07:19.843056   12276 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0718 22:07:19.843074   12276 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0718 22:07:19.843080   12276 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0718 22:07:19.843090   12276 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0718 22:07:19.843094   12276 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0718 22:07:19.846468   12276 image.go:273] response: 
	I0718 22:07:19.976008   12276 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0718 22:07:19.976186   12276 cache.go:194] Successfully downloaded all kic artifacts
	I0718 22:07:19.976254   12276 start.go:360] acquireMachinesLock for force-systemd-flag-274000: {Name:mkc03786f7a8cfe9857cfa519ff6b8e0a2127125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 22:07:19.976425   12276 start.go:364] duration metric: took 156.914µs to acquireMachinesLock for "force-systemd-flag-274000"
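acquireMachinesLock serializes host creation per machine name; the spec logged above ({Delay:500ms Timeout:10m0s}) suggests a poll-until-timeout pattern. An illustrative sketch under that assumption (not minikube's lock.go):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file, sleeping delay between
    // attempts and failing once timeout has elapsed.
    func acquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                return f, nil // release by closing and removing the file
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        f, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer os.Remove(f.Name())
        defer f.Close()
        fmt.Println("lock held")
    }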
	I0718 22:07:19.976456   12276 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-274000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-274000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 22:07:19.976537   12276 start.go:125] createHost starting for "" (driver="docker")
	I0718 22:07:20.018896   12276 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0718 22:07:20.019092   12276 start.go:159] libmachine.API.Create for "force-systemd-flag-274000" (driver="docker")
	I0718 22:07:20.019120   12276 client.go:168] LocalClient.Create starting
	I0718 22:07:20.019249   12276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 22:07:20.019303   12276 main.go:141] libmachine: Decoding PEM data...
	I0718 22:07:20.019321   12276 main.go:141] libmachine: Parsing certificate...
	I0718 22:07:20.019376   12276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 22:07:20.019415   12276 main.go:141] libmachine: Decoding PEM data...
	I0718 22:07:20.019429   12276 main.go:141] libmachine: Parsing certificate...
	I0718 22:07:20.019911   12276 cli_runner.go:164] Run: docker network inspect force-systemd-flag-274000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 22:07:20.037810   12276 cli_runner.go:211] docker network inspect force-systemd-flag-274000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 22:07:20.037925   12276 network_create.go:284] running [docker network inspect force-systemd-flag-274000] to gather additional debugging logs...
	I0718 22:07:20.037940   12276 cli_runner.go:164] Run: docker network inspect force-systemd-flag-274000
	W0718 22:07:20.055536   12276 cli_runner.go:211] docker network inspect force-systemd-flag-274000 returned with exit code 1
	I0718 22:07:20.055566   12276 network_create.go:287] error running [docker network inspect force-systemd-flag-274000]: docker network inspect force-systemd-flag-274000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-274000 not found
	I0718 22:07:20.055594   12276 network_create.go:289] output of [docker network inspect force-systemd-flag-274000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-274000 not found
	
	** /stderr **
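The exit code 1 above is the expected signal on a fresh profile: docker network inspect fails when the named network does not exist, and minikube treats that failure as "create it". A small reproduction, with the log's Go template trimmed to just the subnet:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // networkExists reports whether a docker network with this name exists;
    // `docker network inspect` exits 1 ("network ... not found") otherwise.
    func networkExists(name string) bool {
        cmd := exec.Command("docker", "network", "inspect", name,
            "--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}")
        out, err := cmd.Output()
        if err != nil {
            return false
        }
        fmt.Printf("%s exists, subnet %s\n", name, out)
        return true
    }

    func main() {
        networkExists("force-systemd-flag-274000")
    }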
	I0718 22:07:20.055754   12276 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 22:07:20.074989   12276 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:07:20.076646   12276 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:07:20.077011   12276 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00160c6b0}
	I0718 22:07:20.077029   12276 network_create.go:124] attempt to create docker network force-systemd-flag-274000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0718 22:07:20.077108   12276 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-274000 force-systemd-flag-274000
	I0718 22:07:20.142653   12276 network_create.go:108] docker network force-systemd-flag-274000 192.168.67.0/24 created
	I0718 22:07:20.142696   12276 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-274000" container
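The subnet walk above starts at 192.168.49.0/24 and steps the third octet by 9 until it finds a /24 that no existing docker network claims; the winner's .1 becomes the gateway and .2 the node's static IP. A sketch of that walk, where the reserved set is a stand-in for minikube's real reservation check:

    package main

    import "fmt"

    // pickSubnet walks candidate /24s (third octet 49, 58, 67, ...) and
    // returns the first one not in the reserved set, plus the derived
    // gateway (.1) and node IP (.2).
    func pickSubnet(reserved map[string]bool) (subnet, gateway, nodeIP string) {
        for octet := 49; octet <= 246; octet += 9 {
            s := fmt.Sprintf("192.168.%d.0/24", octet)
            if reserved[s] {
                continue
            }
            return s, fmt.Sprintf("192.168.%d.1", octet), fmt.Sprintf("192.168.%d.2", octet)
        }
        return "", "", ""
    }

    func main() {
        reserved := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
        fmt.Println(pickSubnet(reserved)) // 192.168.67.0/24 192.168.67.1 192.168.67.2
    }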
	I0718 22:07:20.142798   12276 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 22:07:20.200983   12276 cli_runner.go:164] Run: docker volume create force-systemd-flag-274000 --label name.minikube.sigs.k8s.io=force-systemd-flag-274000 --label created_by.minikube.sigs.k8s.io=true
	I0718 22:07:20.225659   12276 oci.go:103] Successfully created a docker volume force-systemd-flag-274000
	I0718 22:07:20.225813   12276 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-274000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-274000 --entrypoint /usr/bin/test -v force-systemd-flag-274000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 22:07:20.639452   12276 oci.go:107] Successfully prepared a docker volume force-systemd-flag-274000
	I0718 22:07:20.639501   12276 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 22:07:20.639518   12276 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 22:07:20.639655   12276 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-274000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
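This is the step the run never gets past: the preload tarball is unpacked into the named volume by a throwaway kicbase container, since a host process cannot write into a docker volume directly. A sketch of the same invocation with an explicit deadline; the tarball path and image ref are placeholders, not the real values:

    package main

    import (
        "context"
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        // Give the extraction a hard deadline instead of letting it hang.
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()
        cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder
            "-v", "force-systemd-flag-274000:/extractDir",
            "kicbase-image-ref", // placeholder
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Fprintf(os.Stderr, "extract failed: %v\n%s", err, out)
            os.Exit(1)
        }
    }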
	I0718 22:13:20.016894   12276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:13:20.016979   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:13:20.036060   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:13:20.036160   12276 retry.go:31] will retry after 310.231237ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
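The attempts that follow are driven by a generic retry helper (retry.go in the log): probe, and on failure sleep a jittered delay before trying again until the budget runs out, which is why the logged delays (310ms, 205ms, 616ms, ...) are not monotonic. A sketch of that shape, not minikube's retry.go:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs probe until it succeeds or the budget is spent, sleeping a
    // jittered, slowly growing delay between attempts.
    func retry(budget time.Duration, probe func() error) error {
        deadline := time.Now().Add(budget)
        base := 200 * time.Millisecond
        for {
            err := probe()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("gave up: %w", err)
            }
            d := base + time.Duration(rand.Int63n(int64(base))) // jitter
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
            base = base * 3 / 2
        }
    }

    func main() {
        _ = retry(2*time.Second, func() error {
            return fmt.Errorf("No such container: force-systemd-flag-274000")
        })
    }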
	I0718 22:13:20.346827   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:13:20.366981   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:13:20.367144   12276 retry.go:31] will retry after 205.573171ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:20.575085   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:13:20.594181   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:13:20.594274   12276 retry.go:31] will retry after 616.898768ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:21.211608   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:13:21.231656   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:13:21.231765   12276 retry.go:31] will retry after 540.789023ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:21.774127   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:13:21.793871   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	W0718 22:13:21.794001   12276 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	
	W0718 22:13:21.794026   12276 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:21.794106   12276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:13:21.794207   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:13:21.811858   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:13:21.811954   12276 retry.go:31] will retry after 203.788153ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:22.016229   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:13:22.035025   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:13:22.035126   12276 retry.go:31] will retry after 457.672756ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:22.493305   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:13:22.513310   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:13:22.513405   12276 retry.go:31] will retry after 463.312522ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:22.978493   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:13:22.996839   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:13:22.996930   12276 retry.go:31] will retry after 438.089501ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:23.436805   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:13:23.456569   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	W0718 22:13:23.456667   12276 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	
	W0718 22:13:23.456680   12276 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:23.456696   12276 start.go:128] duration metric: took 6m3.482814419s to createHost
	I0718 22:13:23.456703   12276 start.go:83] releasing machines lock for "force-systemd-flag-274000", held for 6m3.482933654s
	W0718 22:13:23.456718   12276 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
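The 360-second figure is createHost's overall budget, and the volume extraction alone consumed it. A context-based sketch of that guard, with the durations shrunk so the demo finishes quickly (illustrative, not minikube's start.go):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the real work; here it blocks longer than
    // the budget allows, mimicking the stuck extraction.
    func createHost(ctx context.Context) error {
        select {
        case <-time.After(5 * time.Second):
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func main() {
        // The real budget is 360s; shortened so the demo finishes quickly.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        if err := createHost(ctx); errors.Is(err, context.DeadlineExceeded) {
            fmt.Println("creating host: create host timed out")
        }
    }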
	I0718 22:13:23.457168   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:23.476260   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:23.476325   12276 delete.go:82] Unable to get host status for force-systemd-flag-274000, assuming it has already been deleted: state: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	W0718 22:13:23.476430   12276 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0718 22:13:23.476441   12276 start.go:729] Will try again in 5 seconds ...
	I0718 22:13:28.476711   12276 start.go:360] acquireMachinesLock for force-systemd-flag-274000: {Name:mkc03786f7a8cfe9857cfa519ff6b8e0a2127125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 22:13:28.476898   12276 start.go:364] duration metric: took 131.176µs to acquireMachinesLock for "force-systemd-flag-274000"
	I0718 22:13:28.476932   12276 start.go:96] Skipping create...Using existing machine configuration
	I0718 22:13:28.476949   12276 fix.go:54] fixHost starting: 
	I0718 22:13:28.477335   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:28.497485   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:28.497549   12276 fix.go:112] recreateIfNeeded on force-systemd-flag-274000: state= err=unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:28.497569   12276 fix.go:117] machineExists: false. err=machine does not exist
	I0718 22:13:28.537203   12276 out.go:177] * docker "force-systemd-flag-274000" container is missing, will recreate.
	I0718 22:13:28.610040   12276 delete.go:124] DEMOLISHING force-systemd-flag-274000 ...
	I0718 22:13:28.610225   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:28.629959   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	W0718 22:13:28.630005   12276 stop.go:83] unable to get state: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:28.630022   12276 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:28.630420   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:28.648896   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:28.649012   12276 delete.go:82] Unable to get host status for force-systemd-flag-274000, assuming it has already been deleted: state: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:28.649140   12276 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-274000
	W0718 22:13:28.667415   12276 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-274000 returned with exit code 1
	I0718 22:13:28.667450   12276 kic.go:371] could not find the container force-systemd-flag-274000 to remove it. will try anyways
	I0718 22:13:28.667529   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:28.685847   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	W0718 22:13:28.685900   12276 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:28.685985   12276 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-274000 /bin/bash -c "sudo init 0"
	W0718 22:13:28.703630   12276 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-274000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0718 22:13:28.703664   12276 oci.go:650] error shutdown force-systemd-flag-274000: docker exec --privileged -t force-systemd-flag-274000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:29.704003   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:29.724777   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:29.724829   12276 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:29.724840   12276 oci.go:664] temporary error: container force-systemd-flag-274000 status is  but expect it to be exited
	I0718 22:13:29.724866   12276 retry.go:31] will retry after 743.069008ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:30.468377   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:30.489739   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:30.489785   12276 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:30.489797   12276 oci.go:664] temporary error: container force-systemd-flag-274000 status is  but expect it to be exited
	I0718 22:13:30.489823   12276 retry.go:31] will retry after 974.713136ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:31.464988   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:31.485136   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:31.485190   12276 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:31.485203   12276 oci.go:664] temporary error: container force-systemd-flag-274000 status is  but expect it to be exited
	I0718 22:13:31.485236   12276 retry.go:31] will retry after 1.011750571s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:32.497486   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:32.519349   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:32.519403   12276 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:32.519433   12276 oci.go:664] temporary error: container force-systemd-flag-274000 status is  but expect it to be exited
	I0718 22:13:32.519457   12276 retry.go:31] will retry after 1.689247952s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:34.209556   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:34.229049   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:34.229122   12276 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:34.229135   12276 oci.go:664] temporary error: container force-systemd-flag-274000 status is  but expect it to be exited
	I0718 22:13:34.229160   12276 retry.go:31] will retry after 1.553192347s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:35.784693   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:35.805426   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:35.805484   12276 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:35.805496   12276 oci.go:664] temporary error: container force-systemd-flag-274000 status is  but expect it to be exited
	I0718 22:13:35.805529   12276 retry.go:31] will retry after 2.516545345s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:38.322486   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:38.342825   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:38.342886   12276 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:38.342905   12276 oci.go:664] temporary error: container force-systemd-flag-274000 status is  but expect it to be exited
	I0718 22:13:38.342931   12276 retry.go:31] will retry after 4.97741109s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:43.320688   12276 cli_runner.go:164] Run: docker container inspect force-systemd-flag-274000 --format={{.State.Status}}
	W0718 22:13:43.340989   12276 cli_runner.go:211] docker container inspect force-systemd-flag-274000 --format={{.State.Status}} returned with exit code 1
	I0718 22:13:43.341035   12276 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:13:43.341045   12276 oci.go:664] temporary error: container force-systemd-flag-274000 status is  but expect it to be exited
	I0718 22:13:43.341074   12276 oci.go:88] couldn't shut down force-systemd-flag-274000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	 
	I0718 22:13:43.341146   12276 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-274000
	I0718 22:13:43.360652   12276 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-274000
	W0718 22:13:43.378753   12276 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-274000 returned with exit code 1
	I0718 22:13:43.378867   12276 cli_runner.go:164] Run: docker network inspect force-systemd-flag-274000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 22:13:43.397365   12276 cli_runner.go:164] Run: docker network rm force-systemd-flag-274000
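Before recreating, minikube demolishes whatever remains of the profile: a best-effort "sudo init 0" inside the container, a forced rm -f -v, then removal of the per-profile network. Every container step above failed harmlessly because the container never existed; only the network rm did real work. The same sequence as a best-effort sketch:

    package main

    import "os/exec"

    // run executes a docker subcommand and deliberately ignores the error,
    // matching the "probably ok" / "will try anyways" tone of the log.
    func run(args ...string) { _ = exec.Command("docker", args...).Run() }

    func demolish(name string) {
        run("exec", "--privileged", "-t", name, "/bin/bash", "-c", "sudo init 0")
        run("rm", "-f", "-v", name)
        run("network", "rm", name)
    }

    func main() { demolish("force-systemd-flag-274000") }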
	I0718 22:13:43.474515   12276 fix.go:124] Sleeping 1 second for extra luck!
	I0718 22:13:44.475989   12276 start.go:125] createHost starting for "" (driver="docker")
	I0718 22:13:44.499343   12276 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0718 22:13:44.499523   12276 start.go:159] libmachine.API.Create for "force-systemd-flag-274000" (driver="docker")
	I0718 22:13:44.499553   12276 client.go:168] LocalClient.Create starting
	I0718 22:13:44.499781   12276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 22:13:44.499887   12276 main.go:141] libmachine: Decoding PEM data...
	I0718 22:13:44.499915   12276 main.go:141] libmachine: Parsing certificate...
	I0718 22:13:44.500013   12276 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 22:13:44.500111   12276 main.go:141] libmachine: Decoding PEM data...
	I0718 22:13:44.500141   12276 main.go:141] libmachine: Parsing certificate...
	I0718 22:13:44.521848   12276 cli_runner.go:164] Run: docker network inspect force-systemd-flag-274000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 22:13:44.542404   12276 cli_runner.go:211] docker network inspect force-systemd-flag-274000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 22:13:44.542510   12276 network_create.go:284] running [docker network inspect force-systemd-flag-274000] to gather additional debugging logs...
	I0718 22:13:44.542528   12276 cli_runner.go:164] Run: docker network inspect force-systemd-flag-274000
	W0718 22:13:44.560667   12276 cli_runner.go:211] docker network inspect force-systemd-flag-274000 returned with exit code 1
	I0718 22:13:44.560699   12276 network_create.go:287] error running [docker network inspect force-systemd-flag-274000]: docker network inspect force-systemd-flag-274000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-274000 not found
	I0718 22:13:44.560712   12276 network_create.go:289] output of [docker network inspect force-systemd-flag-274000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-274000 not found
	
	** /stderr **
	I0718 22:13:44.560845   12276 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 22:13:44.580605   12276 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:13:44.582252   12276 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:13:44.583830   12276 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:13:44.585427   12276 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:13:44.587017   12276 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:13:44.587370   12276 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00154fb50}
	I0718 22:13:44.587384   12276 network_create.go:124] attempt to create docker network force-systemd-flag-274000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0718 22:13:44.587453   12276 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-274000 force-systemd-flag-274000
	I0718 22:13:44.655907   12276 network_create.go:108] docker network force-systemd-flag-274000 192.168.94.0/24 created
	I0718 22:13:44.655974   12276 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-274000" container
	I0718 22:13:44.656122   12276 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 22:13:44.677721   12276 cli_runner.go:164] Run: docker volume create force-systemd-flag-274000 --label name.minikube.sigs.k8s.io=force-systemd-flag-274000 --label created_by.minikube.sigs.k8s.io=true
	I0718 22:13:44.695771   12276 oci.go:103] Successfully created a docker volume force-systemd-flag-274000
	I0718 22:13:44.695913   12276 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-274000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-274000 --entrypoint /usr/bin/test -v force-systemd-flag-274000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 22:13:44.967285   12276 oci.go:107] Successfully prepared a docker volume force-systemd-flag-274000
	I0718 22:13:44.967337   12276 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 22:13:44.967353   12276 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 22:13:44.967460   12276 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-274000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 22:19:44.582071   12276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:19:44.582191   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:44.603865   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:44.603988   12276 retry.go:31] will retry after 317.490255ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:44.923874   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:44.944241   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:44.944371   12276 retry.go:31] will retry after 552.471772ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:45.499196   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:45.521566   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:45.521678   12276 retry.go:31] will retry after 355.375563ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:45.878971   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:45.899795   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	W0718 22:19:45.899914   12276 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	
	W0718 22:19:45.899932   12276 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:45.899989   12276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:19:45.900066   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:45.918211   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:45.918325   12276 retry.go:31] will retry after 370.288945ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:46.289029   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:46.309383   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:46.309575   12276 retry.go:31] will retry after 367.176453ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:46.677187   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:46.697646   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:46.697744   12276 retry.go:31] will retry after 364.193743ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:47.064277   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:47.084237   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	W0718 22:19:47.084379   12276 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	
	W0718 22:19:47.084401   12276 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:47.084415   12276 start.go:128] duration metric: took 6m2.526194179s to createHost
	I0718 22:19:47.084488   12276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:19:47.084552   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:47.102681   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:47.102773   12276 retry.go:31] will retry after 328.129427ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:47.431350   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:47.451619   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:47.451710   12276 retry.go:31] will retry after 392.394033ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:47.844472   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:47.864131   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:47.864232   12276 retry.go:31] will retry after 743.616525ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:48.608370   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:48.628188   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	W0718 22:19:48.628289   12276 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	
	W0718 22:19:48.628307   12276 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:48.628370   12276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:19:48.628436   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:48.646803   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:48.646910   12276 retry.go:31] will retry after 145.442546ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:48.792963   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:48.813290   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:48.813401   12276 retry.go:31] will retry after 478.702567ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:49.292999   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:49.312662   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	I0718 22:19:49.312764   12276 retry.go:31] will retry after 787.400513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:50.101835   12276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000
	W0718 22:19:50.122680   12276 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000 returned with exit code 1
	W0718 22:19:50.122783   12276 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	
	W0718 22:19:50.122804   12276 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-274000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-274000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	I0718 22:19:50.122831   12276 fix.go:56] duration metric: took 6m21.563842206s for fixHost
	I0718 22:19:50.122841   12276 start.go:83] releasing machines lock for "force-systemd-flag-274000", held for 6m21.563888451s
	W0718 22:19:50.122919   12276 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-274000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-274000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0718 22:19:50.166585   12276 out.go:177] 
	W0718 22:19:50.188672   12276 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0718 22:19:50.188774   12276 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0718 22:19:50.188809   12276 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0718 22:19:50.210339   12276 out.go:177] 

                                                
                                                
** /stderr **
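
The six-minute createHost failure above ends with minikube trying to run `df -h /var` and `df -BG /var` over SSH; each attempt first has to resolve the host port mapped to the container's 22/tcp, and every `docker container inspect` fails because the container was never created. A minimal standalone sketch of that port lookup (function name is ours, not minikube's; assumes the `docker` CLI is on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
    )

    // sshHostPort asks the Docker daemon which host port is bound to the
    // container's 22/tcp. It fails exactly as in the log above when the
    // container does not exist. minikube wraps the template in literal
    // single quotes and strips them afterwards; we omit them here so the
    // output parses directly.
    func sshHostPort(container string) (int, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return 0, fmt.Errorf("inspect %s: %w", container, err)
        }
        return strconv.Atoi(strings.TrimSpace(string(out)))
    }

    func main() {
        port, err := sshHostPort("force-systemd-flag-274000")
        if err != nil {
            fmt.Println("lookup failed:", err) // e.g. "No such container"
            return
        }
        fmt.Println("ssh host port:", port)
    }
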
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-274000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-274000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-274000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (171.482618ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-274000 host status: state: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000
	

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-274000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
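
The follow-up check runs `docker info --format {{.CgroupDriver}}` inside the guest to confirm that systemd was forced as the cgroup driver, which is what the force-systemd tests assert. Run against a local daemon, the same probe looks like this minimal sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Equivalent of the probe the test runs over `minikube ssh`:
        //   docker info --format {{.CgroupDriver}}
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        // The force-systemd tests expect "systemd" here; plain Docker Desktop
        // reports "cgroupfs", as in the docker info dumps in this log.
        fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
    }
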
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-18 22:19:50.495789 -0700 PDT m=+6891.979100495
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-274000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-274000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-274000",
	        "Id": "4a75d8ec5907df5ef490d98599a7a1306901c7e5fd8fcfc5c231a05ff738e1b2",
	        "Created": "2024-07-19T05:13:44.603881689Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-274000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
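
Note that the inspect output above is a Docker *network*, not a container: `docker inspect` matches any object type by name, and only the leftover bridge network (empty `Containers`, 192.168.94.0/24 IPAM config) survived the failed create. A small sketch of reading those fields with the standard library (the struct follows the JSON shown above; nothing here is minikube code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // network matches the fields visible in the inspect output above.
    type network struct {
        Name string
        IPAM struct {
            Config []struct {
                Subnet  string
                Gateway string
            }
        }
        Containers map[string]json.RawMessage
    }

    func main() {
        out, err := exec.Command("docker", "network", "inspect", "force-systemd-flag-274000").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        var nets []network
        if err := json.Unmarshal(out, &nets); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, n := range nets {
            subnet := "n/a"
            if len(n.IPAM.Config) > 0 {
                subnet = n.IPAM.Config[0].Subnet
            }
            fmt.Printf("%s: subnet=%s attached containers=%d\n", n.Name, subnet, len(n.Containers))
        }
    }
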
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-274000 -n force-systemd-flag-274000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-274000 -n force-systemd-flag-274000: exit status 7 (75.75826ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 22:19:50.594308   12633 status.go:249] status error: host: state: unknown state "force-systemd-flag-274000": docker container inspect force-systemd-flag-274000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-274000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-274000" host is not running, skipping log retrieval (state="Nonexistent")
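
Exit status 7 with state "Nonexistent" is the expected mapping for a missing container, which is why the harness treats it as "may be ok". A sketch of the underlying state probe (helper name is ours):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState returns Docker's reported state ("running", "exited", ...)
    // or "Nonexistent" when the daemon has no such container, mirroring the
    // status output above.
    func containerState(name string) string {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").CombinedOutput()
        if err != nil {
            if strings.Contains(string(out), "No such container") {
                return "Nonexistent"
            }
            return "Error"
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        fmt.Println(containerState("force-systemd-flag-274000"))
    }
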
helpers_test.go:175: Cleaning up "force-systemd-flag-274000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-274000
--- FAIL: TestForceSystemdFlag (751.94s)

                                                
                                    
TestForceSystemdEnv (754.66s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-097000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0718 21:55:44.395910    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:57:35.827269    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 21:59:32.770881    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 22:00:44.394487    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 22:03:47.447513    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 22:04:32.768077    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 22:05:44.393625    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-097000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m33.920539202s)

                                                
                                                
-- stdout --
	* [force-systemd-env-097000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-097000" primary control-plane node in "force-systemd-env-097000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-097000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:55:17.263841   11694 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:55:17.264020   11694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:55:17.264025   11694 out.go:304] Setting ErrFile to fd 2...
	I0718 21:55:17.264029   11694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:55:17.264216   11694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:55:17.265688   11694 out.go:298] Setting JSON to false
	I0718 21:55:17.288055   11694 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6890,"bootTime":1721358027,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 21:55:17.288153   11694 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:55:17.310067   11694 out.go:177] * [force-systemd-env-097000] minikube v1.33.1 on Darwin 14.5
	I0718 21:55:17.351818   11694 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:55:17.351873   11694 notify.go:220] Checking for updates...
	I0718 21:55:17.394734   11694 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 21:55:17.415791   11694 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 21:55:17.438645   11694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:55:17.458724   11694 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	I0718 21:55:17.479634   11694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0718 21:55:17.501198   11694 config.go:182] Loaded profile config "offline-docker-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:55:17.501281   11694 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:55:17.524201   11694 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 21:55:17.524379   11694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 21:55:17.603878   11694 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:false NGoroutines:182 SystemTime:2024-07-19 04:55:17.594949122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 21:55:17.646370   11694 out.go:177] * Using the docker driver based on user configuration
	I0718 21:55:17.667570   11694 start.go:297] selected driver: docker
	I0718 21:55:17.667636   11694 start.go:901] validating driver "docker" against <nil>
	I0718 21:55:17.667653   11694 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:55:17.672115   11694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 21:55:17.750287   11694 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:false NGoroutines:182 SystemTime:2024-07-19 04:55:17.74162476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 21:55:17.750470   11694 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:55:17.750651   11694 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 21:55:17.772272   11694 out.go:177] * Using Docker Desktop driver with root privileges
	I0718 21:55:17.793040   11694 cni.go:84] Creating CNI manager for ""
	I0718 21:55:17.793060   11694 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 21:55:17.793069   11694 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 21:55:17.793110   11694 start.go:340] cluster config:
	{Name:force-systemd-env-097000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:55:17.813924   11694 out.go:177] * Starting "force-systemd-env-097000" primary control-plane node in "force-systemd-env-097000" cluster
	I0718 21:55:17.855998   11694 cache.go:121] Beginning downloading kic base image for docker with docker
	I0718 21:55:17.877060   11694 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0718 21:55:17.919085   11694 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:55:17.919122   11694 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0718 21:55:17.919162   11694 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 21:55:17.919194   11694 cache.go:56] Caching tarball of preloaded images
	I0718 21:55:17.919411   11694 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 21:55:17.919432   11694 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:55:17.920291   11694 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/force-systemd-env-097000/config.json ...
	I0718 21:55:17.920501   11694 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/force-systemd-env-097000/config.json: {Name:mkff6610f34c257294f8326127bd9db74792e53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0718 21:55:17.944690   11694 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0718 21:55:17.944708   11694 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0718 21:55:17.944849   11694 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0718 21:55:17.944866   11694 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0718 21:55:17.944872   11694 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0718 21:55:17.944883   11694 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0718 21:55:17.944888   11694 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0718 21:55:17.947939   11694 image.go:273] response: 
	I0718 21:55:18.078104   11694 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0718 21:55:18.078169   11694 cache.go:194] Successfully downloaded all kic artifacts
	I0718 21:55:18.078216   11694 start.go:360] acquireMachinesLock for force-systemd-env-097000: {Name:mk62ce371ffbfda22fbe9762855ffa3bec9d32cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:55:18.078386   11694 start.go:364] duration metric: took 158.795µs to acquireMachinesLock for "force-systemd-env-097000"
	I0718 21:55:18.078413   11694 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-097000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-097000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:55:18.078479   11694 start.go:125] createHost starting for "" (driver="docker")
	I0718 21:55:18.100711   11694 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0718 21:55:18.100909   11694 start.go:159] libmachine.API.Create for "force-systemd-env-097000" (driver="docker")
	I0718 21:55:18.100941   11694 client.go:168] LocalClient.Create starting
	I0718 21:55:18.101040   11694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 21:55:18.101090   11694 main.go:141] libmachine: Decoding PEM data...
	I0718 21:55:18.101105   11694 main.go:141] libmachine: Parsing certificate...
	I0718 21:55:18.101158   11694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 21:55:18.101196   11694 main.go:141] libmachine: Decoding PEM data...
	I0718 21:55:18.101204   11694 main.go:141] libmachine: Parsing certificate...
	I0718 21:55:18.101711   11694 cli_runner.go:164] Run: docker network inspect force-systemd-env-097000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 21:55:18.119316   11694 cli_runner.go:211] docker network inspect force-systemd-env-097000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 21:55:18.119439   11694 network_create.go:284] running [docker network inspect force-systemd-env-097000] to gather additional debugging logs...
	I0718 21:55:18.119455   11694 cli_runner.go:164] Run: docker network inspect force-systemd-env-097000
	W0718 21:55:18.136691   11694 cli_runner.go:211] docker network inspect force-systemd-env-097000 returned with exit code 1
	I0718 21:55:18.136724   11694 network_create.go:287] error running [docker network inspect force-systemd-env-097000]: docker network inspect force-systemd-env-097000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-097000 not found
	I0718 21:55:18.136740   11694 network_create.go:289] output of [docker network inspect force-systemd-env-097000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-097000 not found
	
	** /stderr **
	I0718 21:55:18.136864   11694 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 21:55:18.155968   11694 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:55:18.157520   11694 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:55:18.159099   11694 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:55:18.159459   11694 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00151e580}
	I0718 21:55:18.159476   11694 network_create.go:124] attempt to create docker network force-systemd-env-097000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0718 21:55:18.159547   11694 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-097000 force-systemd-env-097000
	W0718 21:55:18.176970   11694 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-097000 force-systemd-env-097000 returned with exit code 1
	W0718 21:55:18.177004   11694 network_create.go:149] failed to create docker network force-systemd-env-097000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-097000 force-systemd-env-097000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0718 21:55:18.177018   11694 network_create.go:116] failed to create docker network force-systemd-env-097000 192.168.76.0/24, will retry: subnet is taken
	I0718 21:55:18.178598   11694 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:55:18.178997   11694 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015c08c0}
	I0718 21:55:18.179015   11694 network_create.go:124] attempt to create docker network force-systemd-env-097000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0718 21:55:18.179085   11694 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-097000 force-systemd-env-097000
	I0718 21:55:18.242920   11694 network_create.go:108] docker network force-systemd-env-097000 192.168.85.0/24 created
	I0718 21:55:18.242964   11694 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-097000" container
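
The network_create.go entries above show the subnet-picking strategy: skip subnets already reserved by other networks, try `docker network create`, and advance to the next /24 when the daemon answers "Pool overlaps with other one on this address space" (as happened at 192.168.76.0/24 before 192.168.85.0/24 succeeded); the node's static IP is then the first client address (.2) in the chosen subnet. A condensed sketch of that loop (the candidate list, error matching, and omitted labels/-o flags are ours; minikube's real logic lives in network_create.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // createNetwork tries successive /24 candidates until the daemon accepts
    // one, skipping to the next on an address-pool overlap.
    func createNetwork(name string) (string, error) {
        for _, third := range []int{49, 58, 67, 76, 85, 94} {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            gateway := fmt.Sprintf("192.168.%d.1", third)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
                "-o", "com.docker.network.driver.mtu=65535", name).CombinedOutput()
            if err == nil {
                return subnet, nil
            }
            if strings.Contains(string(out), "Pool overlaps") {
                continue // subnet taken, as at 192.168.76.0/24 above
            }
            return "", fmt.Errorf("network create: %v: %s", err, out)
        }
        return "", fmt.Errorf("no free subnet for %s", name)
    }

    func main() {
        subnet, err := createNetwork("example-network")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("created on", subnet) // the node IP would be .2 here
    }
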
	I0718 21:55:18.243073   11694 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 21:55:18.262744   11694 cli_runner.go:164] Run: docker volume create force-systemd-env-097000 --label name.minikube.sigs.k8s.io=force-systemd-env-097000 --label created_by.minikube.sigs.k8s.io=true
	I0718 21:55:18.280871   11694 oci.go:103] Successfully created a docker volume force-systemd-env-097000
	I0718 21:55:18.280990   11694 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-097000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-097000 --entrypoint /usr/bin/test -v force-systemd-env-097000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 21:55:18.690848   11694 oci.go:107] Successfully prepared a docker volume force-systemd-env-097000
	I0718 21:55:18.690897   11694 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:55:18.690914   11694 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 21:55:18.691039   11694 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-097000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
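
This `docker run` primes the named volume: it mounts the lz4 preload tarball read-only, mounts the volume at /extractDir, and runs tar from inside the kic base image. Note the timestamp jump that follows, from 21:55:18 to 22:01:18; the six-minute StartHostTimeout from the config above elapsed entirely inside this step, and no node container was ever created, which matches the "container is missing, will recreate" messages. A sketch of the same extraction step (the tarball path in main is a placeholder, not from the log; adjust locally):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload replays the volume-priming step from the log: mount the
    // lz4 tarball read-only, mount the named volume at /extractDir, and run
    // tar from inside the kic base image to unpack it.
    func extractPreload(volume, tarball, image string) error {
        out, err := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := extractPreload("force-systemd-env-097000",
            "/path/to/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("preload extracted")
    }
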
	I0718 22:01:18.100734   11694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:01:18.100888   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:01:18.121253   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:01:18.121382   11694 retry.go:31] will retry after 182.027149ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:18.303872   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:01:18.324006   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:01:18.324126   11694 retry.go:31] will retry after 275.420492ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:18.599972   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:01:18.619658   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:01:18.619751   11694 retry.go:31] will retry after 801.642265ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:19.422431   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:01:19.441798   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	W0718 22:01:19.441907   11694 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	
	W0718 22:01:19.441935   11694 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:19.442006   11694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:01:19.442078   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:01:19.459604   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:01:19.459696   11694 retry.go:31] will retry after 251.557346ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:19.712236   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:01:19.732049   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:01:19.732144   11694 retry.go:31] will retry after 493.773388ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:20.226278   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:01:20.246325   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:01:20.246417   11694 retry.go:31] will retry after 433.575016ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:20.681425   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:01:20.700776   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	W0718 22:01:20.700882   11694 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	
	W0718 22:01:20.700898   11694 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:20.700913   11694 start.go:128] duration metric: took 6m2.625057068s to createHost
	I0718 22:01:20.700921   11694 start.go:83] releasing machines lock for "force-systemd-env-097000", held for 6m2.625164168s
	W0718 22:01:20.700934   11694 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0718 22:01:20.701363   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:20.719030   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:20.719088   11694 delete.go:82] Unable to get host status for force-systemd-env-097000, assuming it has already been deleted: state: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	W0718 22:01:20.719181   11694 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0718 22:01:20.719189   11694 start.go:729] Will try again in 5 seconds ...
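
The "will retry after <duration>" lines throughout this log come from a shared retry helper (retry.go) that sleeps a randomized backoff between attempts. A minimal stand-in for that pattern (ours, not minikube's implementation):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryAfter runs fn until it succeeds or attempts run out, sleeping a
    // random slice of maxDelay between tries, producing lines like the
    // "will retry after 328.129427ms" entries above.
    func retryAfter(attempts int, maxDelay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := time.Duration(rand.Int63n(int64(maxDelay)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryAfter(4, 800*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("No such container: example")
            }
            return nil
        })
        fmt.Println("done after", calls, "calls, err =", err)
    }
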
	I0718 22:01:25.721434   11694 start.go:360] acquireMachinesLock for force-systemd-env-097000: {Name:mk62ce371ffbfda22fbe9762855ffa3bec9d32cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 22:01:25.721670   11694 start.go:364] duration metric: took 186.594µs to acquireMachinesLock for "force-systemd-env-097000"
	I0718 22:01:25.721709   11694 start.go:96] Skipping create...Using existing machine configuration
	I0718 22:01:25.721727   11694 fix.go:54] fixHost starting: 
	I0718 22:01:25.722214   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:25.741333   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:25.741377   11694 fix.go:112] recreateIfNeeded on force-systemd-env-097000: state= err=unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:25.741399   11694 fix.go:117] machineExists: false. err=machine does not exist
	I0718 22:01:25.763274   11694 out.go:177] * docker "force-systemd-env-097000" container is missing, will recreate.
	I0718 22:01:25.805859   11694 delete.go:124] DEMOLISHING force-systemd-env-097000 ...
	I0718 22:01:25.806087   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:25.824761   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	W0718 22:01:25.824806   11694 stop.go:83] unable to get state: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:25.824826   11694 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:25.825207   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:25.842322   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:25.842377   11694 delete.go:82] Unable to get host status for force-systemd-env-097000, assuming it has already been deleted: state: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:25.842459   11694 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-097000
	W0718 22:01:25.859376   11694 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-097000 returned with exit code 1
	I0718 22:01:25.859423   11694 kic.go:371] could not find the container force-systemd-env-097000 to remove it. will try anyways
	I0718 22:01:25.859507   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:25.876525   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	W0718 22:01:25.876588   11694 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:25.876673   11694 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-097000 /bin/bash -c "sudo init 0"
	W0718 22:01:25.893478   11694 cli_runner.go:211] docker exec --privileged -t force-systemd-env-097000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0718 22:01:25.893509   11694 oci.go:650] error shutdown force-systemd-env-097000: docker exec --privileged -t force-systemd-env-097000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:26.893985   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:26.914106   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:26.914172   11694 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:26.914183   11694 oci.go:664] temporary error: container force-systemd-env-097000 status is  but expect it to be exited
	I0718 22:01:26.914208   11694 retry.go:31] will retry after 265.744932ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:27.181192   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:27.202704   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:27.202759   11694 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:27.202767   11694 oci.go:664] temporary error: container force-systemd-env-097000 status is  but expect it to be exited
	I0718 22:01:27.202798   11694 retry.go:31] will retry after 375.724221ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:27.580936   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:27.600919   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:27.600991   11694 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:27.601004   11694 oci.go:664] temporary error: container force-systemd-env-097000 status is  but expect it to be exited
	I0718 22:01:27.601027   11694 retry.go:31] will retry after 1.633504475s: couldn't verify container is exited. %v: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:29.236332   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:29.256730   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:29.256795   11694 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:29.256805   11694 oci.go:664] temporary error: container force-systemd-env-097000 status is  but expect it to be exited
	I0718 22:01:29.256827   11694 retry.go:31] will retry after 1.142092102s: couldn't verify container is exited. %v: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:30.399197   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:30.417837   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:30.417887   11694 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:30.417900   11694 oci.go:664] temporary error: container force-systemd-env-097000 status is  but expect it to be exited
	I0718 22:01:30.417925   11694 retry.go:31] will retry after 1.779585635s: couldn't verify container is exited. %v: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:32.198940   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:32.219127   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:32.219180   11694 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:32.219197   11694 oci.go:664] temporary error: container force-systemd-env-097000 status is  but expect it to be exited
	I0718 22:01:32.219227   11694 retry.go:31] will retry after 3.800632401s: couldn't verify container is exited. %v: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:36.022078   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:36.042466   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:36.042523   11694 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:36.042532   11694 oci.go:664] temporary error: container force-systemd-env-097000 status is  but expect it to be exited
	I0718 22:01:36.042556   11694 retry.go:31] will retry after 7.452275034s: couldn't verify container is exited. %v: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:43.495113   11694 cli_runner.go:164] Run: docker container inspect force-systemd-env-097000 --format={{.State.Status}}
	W0718 22:01:43.514930   11694 cli_runner.go:211] docker container inspect force-systemd-env-097000 --format={{.State.Status}} returned with exit code 1
	I0718 22:01:43.514983   11694 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:01:43.514993   11694 oci.go:664] temporary error: container force-systemd-env-097000 status is  but expect it to be exited
	I0718 22:01:43.515026   11694 oci.go:88] couldn't shut down force-systemd-env-097000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	 
	I0718 22:01:43.515109   11694 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-097000
	I0718 22:01:43.533417   11694 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-097000
	W0718 22:01:43.551172   11694 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-097000 returned with exit code 1
	I0718 22:01:43.551296   11694 cli_runner.go:164] Run: docker network inspect force-systemd-env-097000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 22:01:43.569251   11694 cli_runner.go:164] Run: docker network rm force-systemd-env-097000
	I0718 22:01:43.646465   11694 fix.go:124] Sleeping 1 second for extra luck!
	I0718 22:01:44.648223   11694 start.go:125] createHost starting for "" (driver="docker")
	I0718 22:01:44.672164   11694 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0718 22:01:44.672352   11694 start.go:159] libmachine.API.Create for "force-systemd-env-097000" (driver="docker")
	I0718 22:01:44.672382   11694 client.go:168] LocalClient.Create starting
	I0718 22:01:44.672602   11694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 22:01:44.672714   11694 main.go:141] libmachine: Decoding PEM data...
	I0718 22:01:44.672740   11694 main.go:141] libmachine: Parsing certificate...
	I0718 22:01:44.672827   11694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 22:01:44.672918   11694 main.go:141] libmachine: Decoding PEM data...
	I0718 22:01:44.672934   11694 main.go:141] libmachine: Parsing certificate...
	I0718 22:01:44.692578   11694 cli_runner.go:164] Run: docker network inspect force-systemd-env-097000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 22:01:44.712406   11694 cli_runner.go:211] docker network inspect force-systemd-env-097000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 22:01:44.712510   11694 network_create.go:284] running [docker network inspect force-systemd-env-097000] to gather additional debugging logs...
	I0718 22:01:44.712528   11694 cli_runner.go:164] Run: docker network inspect force-systemd-env-097000
	W0718 22:01:44.729840   11694 cli_runner.go:211] docker network inspect force-systemd-env-097000 returned with exit code 1
	I0718 22:01:44.729871   11694 network_create.go:287] error running [docker network inspect force-systemd-env-097000]: docker network inspect force-systemd-env-097000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-097000 not found
	I0718 22:01:44.729889   11694 network_create.go:289] output of [docker network inspect force-systemd-env-097000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-097000 not found
	
	** /stderr **
	I0718 22:01:44.730058   11694 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 22:01:44.749971   11694 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:01:44.751551   11694 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:01:44.753279   11694 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:01:44.754850   11694 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:01:44.756386   11694 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:01:44.757974   11694 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 22:01:44.758364   11694 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001591110}
	I0718 22:01:44.758377   11694 network_create.go:124] attempt to create docker network force-systemd-env-097000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0718 22:01:44.758462   11694 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-097000 force-systemd-env-097000
	I0718 22:01:44.822158   11694 network_create.go:108] docker network force-systemd-env-097000 192.168.103.0/24 created
	I0718 22:01:44.822198   11694 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-097000" container
	I0718 22:01:44.822308   11694 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 22:01:44.841729   11694 cli_runner.go:164] Run: docker volume create force-systemd-env-097000 --label name.minikube.sigs.k8s.io=force-systemd-env-097000 --label created_by.minikube.sigs.k8s.io=true
	I0718 22:01:44.859021   11694 oci.go:103] Successfully created a docker volume force-systemd-env-097000
	I0718 22:01:44.859149   11694 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-097000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-097000 --entrypoint /usr/bin/test -v force-systemd-env-097000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 22:01:45.143287   11694 oci.go:107] Successfully prepared a docker volume force-systemd-env-097000
	I0718 22:01:45.143324   11694 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 22:01:45.143338   11694 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 22:01:45.143444   11694 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-097000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 22:07:44.669966   11694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:07:44.670127   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:44.688763   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:44.688857   11694 retry.go:31] will retry after 210.50767ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:44.900652   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:44.919836   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:44.919928   11694 retry.go:31] will retry after 259.765911ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:45.181264   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:45.200922   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:45.201012   11694 retry.go:31] will retry after 839.580431ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:46.040896   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:46.059523   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	W0718 22:07:46.059629   11694 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	
	W0718 22:07:46.059648   11694 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:46.059714   11694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:07:46.059768   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:46.077361   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:46.077478   11694 retry.go:31] will retry after 294.852374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:46.374739   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:46.394399   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:46.394506   11694 retry.go:31] will retry after 388.265335ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:46.783828   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:46.803294   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:46.803410   11694 retry.go:31] will retry after 291.641058ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:47.097181   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:47.117698   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:47.117807   11694 retry.go:31] will retry after 506.267519ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:47.626428   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:47.646947   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	W0718 22:07:47.647060   11694 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	
	W0718 22:07:47.647073   11694 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:47.647083   11694 start.go:128] duration metric: took 6m3.001465696s to createHost
	I0718 22:07:47.647166   11694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 22:07:47.647230   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:47.664888   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:47.664979   11694 retry.go:31] will retry after 144.974474ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:47.811658   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:47.831634   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:47.831730   11694 retry.go:31] will retry after 496.996419ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:48.329103   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:48.349220   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:48.349314   11694 retry.go:31] will retry after 343.71791ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:48.695463   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:48.715622   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:48.715715   11694 retry.go:31] will retry after 688.56027ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:49.406678   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:49.426680   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	W0718 22:07:49.426779   11694 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	
	W0718 22:07:49.426794   11694 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:49.426866   11694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 22:07:49.426928   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:49.444752   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:49.444849   11694 retry.go:31] will retry after 240.729732ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:49.685854   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:49.703933   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:49.704043   11694 retry.go:31] will retry after 369.817105ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:50.076329   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:50.095926   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	I0718 22:07:50.096018   11694 retry.go:31] will retry after 833.517327ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:50.931835   11694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000
	W0718 22:07:50.951027   11694 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000 returned with exit code 1
	W0718 22:07:50.951129   11694 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	
	W0718 22:07:50.951143   11694 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-097000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-097000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	I0718 22:07:50.951157   11694 fix.go:56] duration metric: took 6m25.232256265s for fixHost
	I0718 22:07:50.951166   11694 start.go:83] releasing machines lock for "force-systemd-env-097000", held for 6m25.232306248s
	W0718 22:07:50.951241   11694 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-097000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-097000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0718 22:07:50.993938   11694 out.go:177] 
	W0718 22:07:51.015234   11694 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0718 22:07:51.015276   11694 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0718 22:07:51.015324   11694 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0718 22:07:51.040817   11694 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-097000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
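Every "exit status 1" in the trace above is the same probe failing: minikube's cli_runner shells out to docker container inspect <name> --format={{.State.Status}}, and the daemon answers "No such container" because the container was never created. A minimal Go sketch of that probe, assuming only the docker CLI calls seen in the log (the helper name containerState is hypothetical, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState runs the probe seen throughout the log:
    //   docker container inspect <name> --format={{.State.Status}}
    // For a missing container, docker exits 1 and prints
    // "Error response from daemon: No such container: <name>".
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").CombinedOutput()
        if err != nil {
            return "", fmt.Errorf("unknown state %q: %s", name, strings.TrimSpace(string(out)))
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("force-systemd-env-097000")
        if err != nil {
            fmt.Println(err) // expected here, since the container is missing
            return
        }
        fmt.Println("state:", state)
    }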
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-097000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-097000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (177.385992ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-097000 host status: state: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-097000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-18 22:07:51.31958 -0700 PDT m=+6172.882463313
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-097000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-097000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-097000",
	        "Id": "3f1338ff40df8fe527a0ae8d9c14140a0db9b79140066d92566875e98d70f069",
	        "Created": "2024-07-19T05:01:44.773927184Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-097000"
	        }
	    }
	]

-- /stdout --
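Note that this post-mortem docker inspect force-systemd-env-097000 matched the leftover bridge network of that name (Scope/Driver/IPAM fields, empty Containers map), not a container: the network was recreated at 22:01:44 but the container itself never came up. A short sketch for disambiguating the two with the same CLI calls (inspectKind is an illustrative helper, not part of the test suite):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // inspectKind reports whether a Docker name resolves to a container,
    // a network, or neither. Bare "docker inspect", as used above, matches
    // any object type, which is why a network shows up in a container
    // post-mortem.
    func inspectKind(name string) string {
        if exec.Command("docker", "container", "inspect", name).Run() == nil {
            return "container"
        }
        if exec.Command("docker", "network", "inspect", name).Run() == nil {
            return "network"
        }
        return "none"
    }

    func main() {
        // In the state captured above, this prints "network".
        fmt.Println(inspectKind("force-systemd-env-097000"))
    }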
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-097000 -n force-systemd-env-097000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-097000 -n force-systemd-env-097000: exit status 7 (73.690429ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0718 22:07:51.413978   12343 status.go:249] status error: host: state: unknown state "force-systemd-env-097000": docker container inspect force-systemd-env-097000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-097000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-097000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-097000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-097000
--- FAIL: TestForceSystemdEnv (754.66s)
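Before declaring the shutdown failed, the harness polled that same probe with growing delays (retry.go:31: 265ms, 375ms, 1.6s, up to 7.4s) and only then moved on with "couldn't shut down ... (might be okay)". A rough sketch of that retry pattern, assuming fixed illustrative delays rather than minikube's jittered ones:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // verifyExited polls the container status with increasing delays,
    // mirroring the retry.go loop in the log above.
    func verifyExited(name string, delays []time.Duration) error {
        for _, d := range delays {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Status}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "exited" {
                return nil
            }
            fmt.Printf("will retry after %v: couldn't verify container is exited\n", d)
            time.Sleep(d)
        }
        return fmt.Errorf("couldn't shut down %s (might be okay)", name)
    }

    func main() {
        err := verifyExited("force-systemd-env-097000",
            []time.Duration{300 * time.Millisecond, time.Second, 4 * time.Second})
        if err != nil {
            fmt.Println(err) // expected: the container never existed
        }
    }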

TestMountStart/serial/VerifyMountPostStop (872.97s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-901000 ssh -- ls /minikube-host
E0718 20:54:32.591772    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:55:44.214621    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:57:07.260934    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:59:32.586732    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 21:00:44.209952    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:04:32.601148    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 21:05:44.224963    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-901000 ssh -- ls /minikube-host: signal: killed (14m32.704146585s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-901000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-901000
helpers_test.go:235: (dbg) docker inspect mount-start-2-901000:

-- stdout --
	[
	    {
	        "Id": "9942f1b2c987ed84035f0ef28a6edede00a406821c5eea5180b884d3dafc6274",
	        "Created": "2024-07-19T03:52:12.092491346Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 131848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-19T03:52:22.539736347Z",
	            "FinishedAt": "2024-07-19T03:52:20.428241121Z"
	        },
	        "Image": "sha256:7bda27423b38cbebec7632cdf15a8fcb063ff209d17af249e6b3f1fbdb5fa681",
	        "ResolvConfPath": "/var/lib/docker/containers/9942f1b2c987ed84035f0ef28a6edede00a406821c5eea5180b884d3dafc6274/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9942f1b2c987ed84035f0ef28a6edede00a406821c5eea5180b884d3dafc6274/hostname",
	        "HostsPath": "/var/lib/docker/containers/9942f1b2c987ed84035f0ef28a6edede00a406821c5eea5180b884d3dafc6274/hosts",
	        "LogPath": "/var/lib/docker/containers/9942f1b2c987ed84035f0ef28a6edede00a406821c5eea5180b884d3dafc6274/9942f1b2c987ed84035f0ef28a6edede00a406821c5eea5180b884d3dafc6274-json.log",
	        "Name": "/mount-start-2-901000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-901000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-901000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/71b12e4beee97e441d1a81c4a4b3d67b6fed362eebc701e5e121a6b8abe301c8-init/diff:/var/lib/docker/overlay2/2e2e62bc1081d5f7cf42750694dae24c89d27e8dd184c9b1c80a7da69faaf085/diff",
	                "MergedDir": "/var/lib/docker/overlay2/71b12e4beee97e441d1a81c4a4b3d67b6fed362eebc701e5e121a6b8abe301c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/71b12e4beee97e441d1a81c4a4b3d67b6fed362eebc701e5e121a6b8abe301c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/71b12e4beee97e441d1a81c4a4b3d67b6fed362eebc701e5e121a6b8abe301c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-901000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-901000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-901000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-901000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-901000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bae6284671850f398de0161576f32a54177ab475d0b24ba046453c2e5979bd23",
	            "SandboxKey": "/var/run/docker/netns/bae628467185",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51830"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51826"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51827"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51828"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51829"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-901000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2a12f23e261045bc7ad8823b6fcfe3a38eb72c877a27b8958937f5abb516b48e",
	                    "EndpointID": "0b5f6bb153155a1c2fe46a35bad36f1f183da07ef0b563bc26a420acfda1a99f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "mount-start-2-901000",
	                        "9942f1b2c987"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-901000 -n mount-start-2-901000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-901000 -n mount-start-2-901000: exit status 6 (240.393505ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:07:03.126203    9000 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-901000" does not appear in /Users/jenkins/minikube-integration/19302-1453/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-901000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountPostStop (872.97s)
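The exit-status-6 failure above is the stale-kubeconfig case: the profile's context is missing from the kubeconfig, so status can see the host but not the cluster endpoint. A sketch of the remedy the warning itself suggests (profile name taken from the test; assumes the cluster is otherwise reachable):

	minikube update-context -p mount-start-2-901000
	kubectl config current-context    # should now report the profile's context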

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (756.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-409000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0718 21:09:32.594658    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 21:10:44.217774    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:13:47.261582    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:14:32.588132    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 21:15:44.211754    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:19:32.596633    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 21:20:44.220861    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-409000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m35.995211717s)

                                                
                                                
-- stdout --
	* [multinode-409000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-409000" primary control-plane node in "multinode-409000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-409000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:08:10.829638    9072 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:08:10.829807    9072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:08:10.829812    9072 out.go:304] Setting ErrFile to fd 2...
	I0718 21:08:10.829816    9072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:08:10.829990    9072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:08:10.831490    9072 out.go:298] Setting JSON to false
	I0718 21:08:10.854118    9072 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4063,"bootTime":1721358027,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 21:08:10.854210    9072 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:08:10.875887    9072 out.go:177] * [multinode-409000] minikube v1.33.1 on Darwin 14.5
	I0718 21:08:10.917779    9072 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:08:10.917805    9072 notify.go:220] Checking for updates...
	I0718 21:08:10.960463    9072 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 21:08:10.981679    9072 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 21:08:11.002724    9072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:08:11.023604    9072 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	I0718 21:08:11.044673    9072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:08:11.066031    9072 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:08:11.090085    9072 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 21:08:11.090347    9072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 21:08:11.169456    9072 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-19 04:08:11.16093956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
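The docker info blob above is the output of docker system info --format "{{json .}}", which minikube parses in Go. When checking a daemon by hand, a narrower Go template pulls individual fields without the full JSON dump; a sketch using fields visible in the blob above:

	docker system info --format '{{.ServerVersion}} {{.OperatingSystem}} NCPU={{.NCPU}}'
	# e.g.: 27.0.3 Docker Desktop NCPU=12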
	I0718 21:08:11.212040    9072 out.go:177] * Using the docker driver based on user configuration
	I0718 21:08:11.233189    9072 start.go:297] selected driver: docker
	I0718 21:08:11.233212    9072 start.go:901] validating driver "docker" against <nil>
	I0718 21:08:11.233229    9072 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:08:11.237665    9072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 21:08:11.316981    9072 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-19 04:08:11.309162277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 21:08:11.317133    9072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:08:11.317339    9072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:08:11.338983    9072 out.go:177] * Using Docker Desktop driver with root privileges
	I0718 21:08:11.360061    9072 cni.go:84] Creating CNI manager for ""
	I0718 21:08:11.360093    9072 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0718 21:08:11.360107    9072 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0718 21:08:11.360216    9072 start.go:340] cluster config:
	{Name:multinode-409000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:08:11.382156    9072 out.go:177] * Starting "multinode-409000" primary control-plane node in "multinode-409000" cluster
	I0718 21:08:11.424050    9072 cache.go:121] Beginning downloading kic base image for docker with docker
	I0718 21:08:11.445242    9072 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0718 21:08:11.487070    9072 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:08:11.487122    9072 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0718 21:08:11.487147    9072 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 21:08:11.487167    9072 cache.go:56] Caching tarball of preloaded images
	I0718 21:08:11.487386    9072 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 21:08:11.487405    9072 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:08:11.488903    9072 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/multinode-409000/config.json ...
	I0718 21:08:11.489022    9072 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/multinode-409000/config.json: {Name:mk7464cf6c2ec302f72b4e51250acf5f8e430500 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0718 21:08:11.512326    9072 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0718 21:08:11.512365    9072 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0718 21:08:11.512550    9072 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0718 21:08:11.512572    9072 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0718 21:08:11.512579    9072 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0718 21:08:11.512589    9072 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0718 21:08:11.512594    9072 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0718 21:08:11.515425    9072 image.go:273] response: 
	I0718 21:08:11.652596    9072 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0718 21:08:11.652647    9072 cache.go:194] Successfully downloaded all kic artifacts
	I0718 21:08:11.652697    9072 start.go:360] acquireMachinesLock for multinode-409000: {Name:mkbdc3ca6460cbeb89ccd0dcec6987ecea99db54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:08:11.652872    9072 start.go:364] duration metric: took 162.216µs to acquireMachinesLock for "multinode-409000"
	I0718 21:08:11.652900    9072 start.go:93] Provisioning new machine with config: &{Name:multinode-409000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:08:11.652966    9072 start.go:125] createHost starting for "" (driver="docker")
	I0718 21:08:11.695372    9072 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 21:08:11.695568    9072 start.go:159] libmachine.API.Create for "multinode-409000" (driver="docker")
	I0718 21:08:11.695598    9072 client.go:168] LocalClient.Create starting
	I0718 21:08:11.695716    9072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 21:08:11.695768    9072 main.go:141] libmachine: Decoding PEM data...
	I0718 21:08:11.695784    9072 main.go:141] libmachine: Parsing certificate...
	I0718 21:08:11.695833    9072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 21:08:11.695871    9072 main.go:141] libmachine: Decoding PEM data...
	I0718 21:08:11.695879    9072 main.go:141] libmachine: Parsing certificate...
	I0718 21:08:11.696361    9072 cli_runner.go:164] Run: docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 21:08:11.713725    9072 cli_runner.go:211] docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 21:08:11.713822    9072 network_create.go:284] running [docker network inspect multinode-409000] to gather additional debugging logs...
	I0718 21:08:11.713841    9072 cli_runner.go:164] Run: docker network inspect multinode-409000
	W0718 21:08:11.730771    9072 cli_runner.go:211] docker network inspect multinode-409000 returned with exit code 1
	I0718 21:08:11.730800    9072 network_create.go:287] error running [docker network inspect multinode-409000]: docker network inspect multinode-409000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-409000 not found
	I0718 21:08:11.730823    9072 network_create.go:289] output of [docker network inspect multinode-409000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-409000 not found
	
	** /stderr **
	I0718 21:08:11.730957    9072 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 21:08:11.750527    9072 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:08:11.752141    9072 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:08:11.752514    9072 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001583090}
	I0718 21:08:11.752535    9072 network_create.go:124] attempt to create docker network multinode-409000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0718 21:08:11.752609    9072 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-409000 multinode-409000
	I0718 21:08:11.815845    9072 network_create.go:108] docker network multinode-409000 192.168.67.0/24 created
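The network created above carries minikube's labels, so it can be located and verified without hard-coding its name; a sketch (label and subnet values taken from the log lines above):

	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	docker network inspect multinode-409000 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'   # 192.168.67.0/24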
	I0718 21:08:11.815881    9072 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-409000" container
	I0718 21:08:11.815995    9072 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 21:08:11.833718    9072 cli_runner.go:164] Run: docker volume create multinode-409000 --label name.minikube.sigs.k8s.io=multinode-409000 --label created_by.minikube.sigs.k8s.io=true
	I0718 21:08:11.852345    9072 oci.go:103] Successfully created a docker volume multinode-409000
	I0718 21:08:11.852475    9072 cli_runner.go:164] Run: docker run --rm --name multinode-409000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-409000 --entrypoint /usr/bin/test -v multinode-409000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 21:08:12.268660    9072 oci.go:107] Successfully prepared a docker volume multinode-409000
	I0718 21:08:12.268699    9072 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:08:12.268714    9072 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 21:08:12.268859    9072 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-409000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
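The two docker run commands above are how the kic driver seeds the profile's volume: a throwaway sidecar validates /var on the volume, then tar -I lz4 -xf unpacks the preloaded image tarball into it. As a sketch, the result could be checked from any container that mounts the same volume (alpine as the helper image and the lib/docker path are assumptions, not something this test runs):

	docker run --rm -v multinode-409000:/data alpine du -sh /data/lib/docker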
	I0718 21:14:11.690265    9072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 21:14:11.690415    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:14:11.710008    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:14:11.710134    9072 retry.go:31] will retry after 320.976703ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:12.031468    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:14:12.051361    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:14:12.051475    9072 retry.go:31] will retry after 496.756838ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:12.550100    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:14:12.569568    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:14:12.569658    9072 retry.go:31] will retry after 332.935338ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:12.902882    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:14:12.922605    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:14:12.922701    9072 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:14:12.922733    9072 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:12.922804    9072 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 21:14:12.922867    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:14:12.939799    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:14:12.939893    9072 retry.go:31] will retry after 268.199751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:13.210517    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:14:13.230245    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:14:13.230346    9072 retry.go:31] will retry after 256.678599ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:13.489472    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:14:13.508862    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:14:13.508960    9072 retry.go:31] will retry after 789.862185ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:14.300302    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:14:14.318976    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:14:14.319078    9072 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:14:14.319100    9072 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:14.319112    9072 start.go:128] duration metric: took 6m2.674002704s to createHost
	I0718 21:14:14.319119    9072 start.go:83] releasing machines lock for "multinode-409000", held for 6m2.674106243s
	W0718 21:14:14.319134    9072 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0718 21:14:14.319562    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:14.336562    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:14:14.336620    9072 delete.go:82] Unable to get host status for multinode-409000, assuming it has already been deleted: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	W0718 21:14:14.336721    9072 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0718 21:14:14.336730    9072 start.go:729] Will try again in 5 seconds ...
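The retry.go entries above apply a growing, jittered delay between probes inside minikube; the same probe can be replayed by hand with a plain doubling backoff (a sketch, not minikube's exact schedule):

	for delay in 1 2 4 8 16; do
	  docker container inspect multinode-409000 --format '{{.State.Status}}' 2>/dev/null && break
	  sleep "$delay"
	done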
	I0718 21:14:19.338051    9072 start.go:360] acquireMachinesLock for multinode-409000: {Name:mkbdc3ca6460cbeb89ccd0dcec6987ecea99db54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:14:19.338250    9072 start.go:364] duration metric: took 160.049µs to acquireMachinesLock for "multinode-409000"
	I0718 21:14:19.338285    9072 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:14:19.338304    9072 fix.go:54] fixHost starting: 
	I0718 21:14:19.338804    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:19.359427    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:14:19.359474    9072 fix.go:112] recreateIfNeeded on multinode-409000: state= err=unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:19.359498    9072 fix.go:117] machineExists: false. err=machine does not exist
	I0718 21:14:19.401940    9072 out.go:177] * docker "multinode-409000" container is missing, will recreate.
	I0718 21:14:19.423189    9072 delete.go:124] DEMOLISHING multinode-409000 ...
	I0718 21:14:19.423409    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:19.441588    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	W0718 21:14:19.441631    9072 stop.go:83] unable to get state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:19.441653    9072 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:19.442015    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:19.458832    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:14:19.458881    9072 delete.go:82] Unable to get host status for multinode-409000, assuming it has already been deleted: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:19.458957    9072 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-409000
	W0718 21:14:19.475903    9072 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-409000 returned with exit code 1
	I0718 21:14:19.475938    9072 kic.go:371] could not find the container multinode-409000 to remove it. will try anyways
	I0718 21:14:19.476022    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:19.492745    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	W0718 21:14:19.492802    9072 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:19.492896    9072 cli_runner.go:164] Run: docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0"
	W0718 21:14:19.510618    9072 cli_runner.go:211] docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0718 21:14:19.510645    9072 oci.go:650] error shutdown multinode-409000: docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:20.510960    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:20.530413    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:14:20.530467    9072 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:20.530479    9072 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:14:20.530507    9072 retry.go:31] will retry after 381.917865ms: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:20.913151    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:20.932377    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:14:20.932420    9072 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:20.932431    9072 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:14:20.932455    9072 retry.go:31] will retry after 789.042094ms: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:21.723910    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:21.743342    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:14:21.743390    9072 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:21.743398    9072 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:14:21.743421    9072 retry.go:31] will retry after 1.447746463s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:23.191634    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:23.211163    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:14:23.211208    9072 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:23.211227    9072 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:14:23.211250    9072 retry.go:31] will retry after 2.502878206s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:25.714706    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:25.734986    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:14:25.735032    9072 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:25.735046    9072 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:14:25.735077    9072 retry.go:31] will retry after 2.138142595s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:27.873441    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:27.892609    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:14:27.892650    9072 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:27.892660    9072 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:14:27.892685    9072 retry.go:31] will retry after 3.670456988s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:31.565390    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:31.585313    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:14:31.585370    9072 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:31.585386    9072 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:14:31.585410    9072 retry.go:31] will retry after 7.814457613s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:39.400226    9072 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:14:39.419355    9072 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:14:39.419415    9072 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:14:39.419430    9072 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:14:39.419468    9072 oci.go:88] couldn't shut down multinode-409000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	 
	I0718 21:14:39.419553    9072 cli_runner.go:164] Run: docker rm -f -v multinode-409000
	I0718 21:14:39.437187    9072 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-409000
	W0718 21:14:39.455086    9072 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-409000 returned with exit code 1
	I0718 21:14:39.455199    9072 cli_runner.go:164] Run: docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 21:14:39.473396    9072 cli_runner.go:164] Run: docker network rm multinode-409000
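The teardown above (docker rm -f -v followed by docker network rm) is the manual, driver-level equivalent of removing the profile; the bundled form would be, as a sketch:

	minikube delete -p multinode-409000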
	I0718 21:14:39.560036    9072 fix.go:124] Sleeping 1 second for extra luck!
	I0718 21:14:40.561171    9072 start.go:125] createHost starting for "" (driver="docker")
	I0718 21:14:40.583527    9072 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 21:14:40.583703    9072 start.go:159] libmachine.API.Create for "multinode-409000" (driver="docker")
	I0718 21:14:40.583738    9072 client.go:168] LocalClient.Create starting
	I0718 21:14:40.583950    9072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 21:14:40.584051    9072 main.go:141] libmachine: Decoding PEM data...
	I0718 21:14:40.584079    9072 main.go:141] libmachine: Parsing certificate...
	I0718 21:14:40.584159    9072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 21:14:40.584236    9072 main.go:141] libmachine: Decoding PEM data...
	I0718 21:14:40.584253    9072 main.go:141] libmachine: Parsing certificate...
	I0718 21:14:40.584997    9072 cli_runner.go:164] Run: docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 21:14:40.603843    9072 cli_runner.go:211] docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 21:14:40.603937    9072 network_create.go:284] running [docker network inspect multinode-409000] to gather additional debugging logs...
	I0718 21:14:40.603954    9072 cli_runner.go:164] Run: docker network inspect multinode-409000
	W0718 21:14:40.620855    9072 cli_runner.go:211] docker network inspect multinode-409000 returned with exit code 1
	I0718 21:14:40.620882    9072 network_create.go:287] error running [docker network inspect multinode-409000]: docker network inspect multinode-409000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-409000 not found
	I0718 21:14:40.620899    9072 network_create.go:289] output of [docker network inspect multinode-409000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-409000 not found
	
	** /stderr **
	I0718 21:14:40.621046    9072 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 21:14:40.639616    9072 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:14:40.641344    9072 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:14:40.643058    9072 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:14:40.643513    9072 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015836c0}
	I0718 21:14:40.643530    9072 network_create.go:124] attempt to create docker network multinode-409000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0718 21:14:40.643622    9072 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-409000 multinode-409000
	W0718 21:14:40.661244    9072 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-409000 multinode-409000 returned with exit code 1
	W0718 21:14:40.661289    9072 network_create.go:149] failed to create docker network multinode-409000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-409000 multinode-409000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0718 21:14:40.661306    9072 network_create.go:116] failed to create docker network multinode-409000 192.168.76.0/24, will retry: subnet is taken
	I0718 21:14:40.662701    9072 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:14:40.663159    9072 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00147f400}
	I0718 21:14:40.663173    9072 network_create.go:124] attempt to create docker network multinode-409000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0718 21:14:40.663242    9072 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-409000 multinode-409000
	I0718 21:14:40.726959    9072 network_create.go:108] docker network multinode-409000 192.168.85.0/24 created
	I0718 21:14:40.726997    9072 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-409000" container
	I0718 21:14:40.727111    9072 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 21:14:40.745027    9072 cli_runner.go:164] Run: docker volume create multinode-409000 --label name.minikube.sigs.k8s.io=multinode-409000 --label created_by.minikube.sigs.k8s.io=true
	I0718 21:14:40.761971    9072 oci.go:103] Successfully created a docker volume multinode-409000
	I0718 21:14:40.762112    9072 cli_runner.go:164] Run: docker run --rm --name multinode-409000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-409000 --entrypoint /usr/bin/test -v multinode-409000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 21:14:41.022064    9072 oci.go:107] Successfully prepared a docker volume multinode-409000
	I0718 21:14:41.022099    9072 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:14:41.022112    9072 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 21:14:41.022209    9072 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-409000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 21:20:40.593778    9072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 21:20:40.593915    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:40.613359    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:40.613475    9072 retry.go:31] will retry after 223.961424ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:40.837896    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:40.858602    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:40.858710    9072 retry.go:31] will retry after 521.394838ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:41.381098    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:41.400524    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:41.400620    9072 retry.go:31] will retry after 611.662716ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:42.014491    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:42.034279    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:20:42.034395    9072 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:20:42.034416    9072 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:42.034480    9072 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 21:20:42.034552    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:42.051922    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:42.052014    9072 retry.go:31] will retry after 169.431286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:42.223764    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:42.243283    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:42.243383    9072 retry.go:31] will retry after 187.71949ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:42.431449    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:42.451000    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:42.451091    9072 retry.go:31] will retry after 351.90835ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:42.805435    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:42.824773    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:42.824865    9072 retry.go:31] will retry after 578.547543ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:43.404261    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:43.423090    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:20:43.423197    9072 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:20:43.423215    9072 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:43.423232    9072 start.go:128] duration metric: took 6m2.854380897s to createHost
	I0718 21:20:43.423301    9072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 21:20:43.423356    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:43.440931    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:43.441027    9072 retry.go:31] will retry after 313.000743ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:43.755450    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:43.774981    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:43.775071    9072 retry.go:31] will retry after 497.438825ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:44.274985    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:44.294980    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:44.295073    9072 retry.go:31] will retry after 715.735354ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:45.012232    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:45.055958    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:20:45.056065    9072 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:20:45.056080    9072 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:45.056141    9072 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 21:20:45.056198    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:45.073150    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:45.073267    9072 retry.go:31] will retry after 156.557086ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:45.232235    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:45.252121    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:45.252214    9072 retry.go:31] will retry after 200.328137ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:45.453460    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:45.473118    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:45.473212    9072 retry.go:31] will retry after 366.253357ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:45.841867    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:45.861832    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:20:45.861954    9072 retry.go:31] will retry after 717.387924ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:46.580383    9072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:20:46.599965    9072 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:20:46.600070    9072 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:20:46.600088    9072 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:20:46.600098    9072 fix.go:56] duration metric: took 6m27.254798133s for fixHost
	I0718 21:20:46.600104    9072 start.go:83] releasing machines lock for "multinode-409000", held for 6m27.254842026s
	W0718 21:20:46.600181    9072 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-409000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-409000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0718 21:20:46.643596    9072 out.go:177] 
	W0718 21:20:46.666618    9072 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0718 21:20:46.666685    9072 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0718 21:20:46.666727    9072 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0718 21:20:46.687628    9072 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-409000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
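
The repeated "will retry after ..." lines in the log above all come from the same probe: resolving which host port Docker mapped to the container's 22/tcp so the runner can open an SSH session, retried with a growing delay while the container does not exist. A minimal Go sketch of that pattern follows; the doubling delay and the attempt cap are illustrative assumptions, not the values minikube's retry.go actually uses.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// sshPort looks up the host port Docker mapped to 22/tcp on the container,
	// retrying with a growing delay while the container does not exist yet.
	func sshPort(container string) (string, error) {
		var lastErr error
		delay := 200 * time.Millisecond // assumption: the real retry helper uses jittered delays
		for attempt := 0; attempt < 4; attempt++ {
			out, err := exec.Command("docker", "container", "inspect", "-f",
				`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
				container).Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			lastErr = err // e.g. "No such container" while the host is still missing
			time.Sleep(delay)
			delay *= 2
		}
		return "", fmt.Errorf("get port 22 for %q: %v", container, lastErr)
	}
	
	func main() {
		port, err := sshPort("multinode-409000")
		fmt.Println(port, err)
	}
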
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "83ec88f1a5b059b6c0cdc24a83ac4d67116954f7eb533ea93561cd307a70d602",
	        "Created": "2024-07-19T04:14:40.679070844Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (73.391249ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:20:46.876794    9511 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (756.10s)
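
The network setup in the log above steps through candidate private /24 subnets: 192.168.49.0/24, 58 and 67 are already reserved, 76 fails at the daemon with "Pool overlaps with other one on this address space", and 85 succeeds. A minimal Go sketch of that fallback, assuming a fixed candidate list and omitting the extra bridge options and labels minikube passes:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// createNetwork walks candidate /24 subnets (the 49 -> 58 -> 67 -> 76 -> 85
	// progression seen in the log) and moves on when the daemon reports an
	// overlap. The candidate list and step are simplifying assumptions.
	func createNetwork(name string) (string, error) {
		for third := 49; third <= 103; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			gateway := fmt.Sprintf("192.168.%d.1", third)
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
				name).CombinedOutput()
			if err == nil {
				return subnet, nil
			}
			if strings.Contains(string(out), "Pool overlaps") {
				continue // subnet is taken; try the next candidate
			}
			return "", fmt.Errorf("docker network create %s: %v: %s", subnet, err, out)
		}
		return "", fmt.Errorf("no free subnet found for %s", name)
	}
	
	func main() {
		subnet, err := createNetwork("demo-net")
		fmt.Println(subnet, err)
	}
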

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (87.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (99.607522ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-409000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- rollout status deployment/busybox: exit status 1 (98.646596ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.952571ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.553013ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.808716ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.673963ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.956285ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.269435ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.743716ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.942312ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.758409ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.370321ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (98.507449ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- exec  -- nslookup kubernetes.io: exit status 1 (99.778487ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- exec  -- nslookup kubernetes.default: exit status 1 (98.357145ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (97.837137ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "83ec88f1a5b059b6c0cdc24a83ac4d67116954f7eb533ea93561cd307a70d602",
	        "Created": "2024-07-19T04:14:40.679070844Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (72.934671ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:22:14.105418    9597 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (87.23s)
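
Each "failed to retrieve Pod IPs (may be temporary)" line above is one iteration of a polling loop over the same jsonpath query. A minimal Go sketch of such a loop, calling kubectl with --context directly for self-containment; the attempt cap and sleep interval are assumptions rather than the test's actual values:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// podIPs polls the cluster for pod IPs, treating failures as possibly
	// temporary and retrying a bounded number of times.
	func podIPs(context string) ([]string, error) {
		var lastErr error
		for attempt := 0; attempt < 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
			if err == nil {
				return strings.Fields(string(out)), nil
			}
			lastErr = err // here: no server found for cluster "multinode-409000"
			time.Sleep(2 * time.Second)
		}
		return nil, fmt.Errorf("failed to retrieve Pod IPs: %v", lastErr)
	}
	
	func main() {
		ips, err := podIPs("multinode-409000")
		fmt.Println(ips, err)
	}
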

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-409000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (97.916705ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-409000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "83ec88f1a5b059b6c0cdc24a83ac4d67116954f7eb533ea93561cd307a70d602",
	        "Created": "2024-07-19T04:14:40.679070844Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (73.593487ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:22:14.297882    9604 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.19s)

                                                
                                    
TestMultiNode/serial/AddNode (0.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-409000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-409000 -v 3 --alsologtostderr: exit status 80 (159.673009ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:14.352843    9607 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:14.353130    9607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:14.353136    9607 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:14.353140    9607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:14.353313    9607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:14.353653    9607 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:14.353921    9607 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:14.354294    9607 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:22:14.371128    9607 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:22:14.393667    9607 out.go:177] 
	W0718 21:22:14.415142    9607 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-409000 host status: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-409000 host status: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	I0718 21:22:14.435791    9607 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-409000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "83ec88f1a5b059b6c0cdc24a83ac4d67116954f7eb533ea93561cd307a70d602",
	        "Created": "2024-07-19T04:14:40.679070844Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (72.393654ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:22:14.551688    9611 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.25s)
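
The GUEST_STATUS failure above comes from the host-state probe: docker container inspect NAME --format={{.State.Status}} prints the container's state string, and exits non-zero with "No such container" once the container is gone, which status then reports as "Nonexistent". A minimal Go sketch of that probe:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerState returns the Docker container's state string ("running",
	// "exited", ...); a missing container makes inspect exit non-zero.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "Nonexistent", fmt.Errorf("unknown state %q: %v", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		state, err := containerState("multinode-409000")
		fmt.Println(state, err)
	}
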

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-409000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-409000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (37.338892ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-409000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-409000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-409000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "83ec88f1a5b059b6c0cdc24a83ac4d67116954f7eb533ea93561cd307a70d602",
	        "Created": "2024-07-19T04:14:40.679070844Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (72.660361ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:22:14.683010    9616 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.13s)
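
The "unexpected end of JSON input" above is the decode step failing on empty output, since kubectl printed nothing for the missing context. When the cluster exists, the [{range .items[*]}{.metadata.labels},{end}] template emits a JSON array with a trailing comma before the closing bracket. A minimal Go sketch of fetching and decoding the label list; stripping the ",]" is an assumption about the template's output, not code taken from multinode_test.go:

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)
	
	// nodeLabels runs the same jsonpath query as the test and decodes the result.
	func nodeLabels(context string) ([]map[string]string, error) {
		out, err := exec.Command("kubectl", "--context", context, "get", "nodes",
			"-o", `jsonpath=[{range .items[*]}{.metadata.labels},{end}]`).Output()
		if err != nil {
			return nil, err
		}
		// Assumption: drop the trailing comma the {range} template leaves before "]".
		cleaned := strings.Replace(string(out), ",]", "]", 1)
		var labels []map[string]string
		if err := json.Unmarshal([]byte(cleaned), &labels); err != nil {
			return nil, fmt.Errorf("failed to decode json from label list: %w", err)
		}
		return labels, nil
	}
	
	func main() {
		fmt.Println(nodeLabels("multinode-409000"))
	}
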

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-409000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-901000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-409000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-409000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-409000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
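The check that fails above counts the entries under the valid profile's Config.Nodes in the `profile list --output json` payload quoted in the error: it expects 3, but the recreated profile carries only the single control-plane node. A minimal Go sketch of that decode, using trimmed-down struct shapes rather than minikube's real config types:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Trimmed-down shapes for just the fields the node-count check needs;
	// the real profile config carries every field shown in the JSON above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
					Worker       bool `json:"Worker"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	// nodeCount returns how many nodes `profile list --output json`
	// reports for the named profile.
	func nodeCount(out []byte, profile string) (int, error) {
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			return 0, err
		}
		for _, p := range pl.Valid {
			if p.Name == profile {
				return len(p.Config.Nodes), nil
			}
		}
		return 0, fmt.Errorf("profile %q not in valid profiles", profile)
	}

	func main() {
		out := []byte(`{"valid":[{"Name":"multinode-409000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		n, err := nodeCount(out, "multinode-409000")
		fmt.Println(n, err) // 1 <nil>: the count the assertion flags
	}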
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "83ec88f1a5b059b6c0cdc24a83ac4d67116954f7eb533ea93561cd307a70d602",
	        "Created": "2024-07-19T04:14:40.679070844Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
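Note that the JSON above is the minikube network object (bridge driver, IPAM subnet, network labels), not the container: the container named multinode-409000 no longer exists, and a bare `docker inspect` matches any Docker object type with that name. Forcing the type, as in this small Go sketch, shows the asymmetry the status checks keep hitting:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// A bare `docker inspect NAME` matches any object type; forcing the
		// type shows the container is gone while the network still exists.
		for _, typ := range []string{"container", "network"} {
			out, err := exec.Command("docker", "inspect", "--type", typ, "multinode-409000").CombinedOutput()
			if err != nil {
				fmt.Printf("%s: %v\n%s", typ, err, out)
				continue
			}
			fmt.Printf("%s: found\n", typ)
		}
	}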
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (73.795005ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:22:14.891981    9624 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status --output json --alsologtostderr: exit status 7 (72.739264ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-409000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:14.946252    9627 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:14.946514    9627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:14.946520    9627 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:14.946523    9627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:14.946695    9627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:14.946876    9627 out.go:298] Setting JSON to true
	I0718 21:22:14.946897    9627 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:14.946940    9627 notify.go:220] Checking for updates...
	I0718 21:22:14.947179    9627 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:14.947196    9627 status.go:255] checking status of multinode-409000 ...
	I0718 21:22:14.947632    9627 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:22:14.964789    9627 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:22:14.964845    9627 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:22:14.964872    9627 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:22:14.964894    9627 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:22:14.964906    9627 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-409000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
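The unmarshal error above is a shape mismatch: with only one node left, `status --output json` printed a single JSON object (see the stdout block), while the test decodes into a []cmd.Status slice. A tolerant decoder would accept both shapes; this sketch uses a local Status struct mirroring the fields visible in the output, not minikube's actual cmd.Status:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status mirrors the fields visible in the stdout above.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	// decodeStatuses accepts either a JSON array or the single object
	// printed here, which is what the []cmd.Status decode chokes on.
	func decodeStatuses(out []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(out, &many); err == nil {
			return many, nil
		}
		var one Status
		if err := json.Unmarshal(out, &one); err != nil {
			return nil, fmt.Errorf("decode status: %w", err)
		}
		return []Status{one}, nil
	}

	func main() {
		out := []byte(`{"Name":"multinode-409000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`)
		sts, err := decodeStatuses(out)
		fmt.Println(sts, err)
	}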
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "83ec88f1a5b059b6c0cdc24a83ac4d67116954f7eb533ea93561cd307a70d602",
	        "Created": "2024-07-19T04:14:40.679070844Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (74.508456ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:22:15.125021    9631 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.23s)

                                                
                                    
TestMultiNode/serial/StopNode (0.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 node stop m03: exit status 85 (148.146893ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-409000 node stop m03": exit status 85
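Exit status 85 (GUEST_NODE_RETRIEVE) follows directly from the profile shown earlier: its Nodes list holds a single unnamed control-plane entry, so there is no "m03" to stop. A small sketch of that lookup, with field names mirroring the report rather than minikube's packages:

	package main

	import "fmt"

	// node carries just the fields relevant here; the profile JSON earlier
	// shows one entry with Name "" and ControlPlane true, so a lookup for
	// "m03" has nothing to match.
	type node struct {
		Name         string
		ControlPlane bool
		Worker       bool
	}

	func findNode(nodes []node, name string) (node, error) {
		for _, n := range nodes {
			if n.Name == name {
				return n, nil
			}
		}
		return node{}, fmt.Errorf("retrieving node: Could not find node %s", name)
	}

	func main() {
		nodes := []node{{Name: "", ControlPlane: true, Worker: true}}
		_, err := findNode(nodes, "m03")
		fmt.Println(err) // mirrors the GUEST_NODE_RETRIEVE message above
	}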
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status: exit status 7 (74.915471ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:22:15.348829    9636 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:22:15.348840    9636 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr: exit status 7 (72.885444ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:15.402959    9639 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:15.403138    9639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:15.403144    9639 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:15.403147    9639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:15.403326    9639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:15.403492    9639 out.go:298] Setting JSON to false
	I0718 21:22:15.403515    9639 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:15.403554    9639 notify.go:220] Checking for updates...
	I0718 21:22:15.403786    9639 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:15.403800    9639 status.go:255] checking status of multinode-409000 ...
	I0718 21:22:15.404202    9639 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:22:15.421707    9639 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:22:15.421781    9639 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:22:15.421802    9639 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:22:15.421825    9639 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:22:15.421833    9639 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr": multinode-409000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr": multinode-409000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr": multinode-409000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "83ec88f1a5b059b6c0cdc24a83ac4d67116954f7eb533ea93561cd307a70d602",
	        "Created": "2024-07-19T04:14:40.679070844Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (72.041108ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:22:15.515054    9643 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.39s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (47.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 node start m03 -v=7 --alsologtostderr: exit status 85 (147.166369ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:15.570431    9646 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:15.570815    9646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:15.570821    9646 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:15.570826    9646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:15.571001    9646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:15.571336    9646 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:15.571597    9646 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:15.593873    9646 out.go:177] 
	W0718 21:22:15.614870    9646 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0718 21:22:15.614894    9646 out.go:239] * 
	* 
	W0718 21:22:15.619046    9646 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:22:15.640593    9646 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0718 21:22:15.570431    9646 out.go:291] Setting OutFile to fd 1 ...
I0718 21:22:15.570815    9646 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 21:22:15.570821    9646 out.go:304] Setting ErrFile to fd 2...
I0718 21:22:15.570826    9646 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 21:22:15.571001    9646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
I0718 21:22:15.571336    9646 mustload.go:65] Loading cluster: multinode-409000
I0718 21:22:15.571597    9646 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 21:22:15.593873    9646 out.go:177] 
W0718 21:22:15.614870    9646 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0718 21:22:15.614894    9646 out.go:239] * 
* 
W0718 21:22:15.619046    9646 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0718 21:22:15.640593    9646 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-409000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr: exit status 7 (73.247223ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:15.717107    9648 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:15.717291    9648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:15.717296    9648 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:15.717300    9648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:15.717467    9648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:15.717644    9648 out.go:298] Setting JSON to false
	I0718 21:22:15.717666    9648 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:15.717704    9648 notify.go:220] Checking for updates...
	I0718 21:22:15.717964    9648 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:15.717979    9648 status.go:255] checking status of multinode-409000 ...
	I0718 21:22:15.718383    9648 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:22:15.735783    9648 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:22:15.735841    9648 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:22:15.735866    9648 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:22:15.735887    9648 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:22:15.735897    9648 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr: exit status 7 (79.785215ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:16.870744    9651 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:16.871042    9651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:16.871048    9651 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:16.871065    9651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:16.871266    9651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:16.871462    9651 out.go:298] Setting JSON to false
	I0718 21:22:16.871496    9651 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:16.871555    9651 notify.go:220] Checking for updates...
	I0718 21:22:16.872743    9651 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:16.872761    9651 status.go:255] checking status of multinode-409000 ...
	I0718 21:22:16.873138    9651 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:22:16.891356    9651 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:22:16.891437    9651 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:22:16.891458    9651 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:22:16.891479    9651 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:22:16.891486    9651 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr: exit status 7 (79.797114ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:18.091726    9654 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:18.091908    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:18.091912    9654 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:18.091916    9654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:18.092091    9654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:18.092272    9654 out.go:298] Setting JSON to false
	I0718 21:22:18.092294    9654 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:18.092332    9654 notify.go:220] Checking for updates...
	I0718 21:22:18.092555    9654 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:18.092571    9654 status.go:255] checking status of multinode-409000 ...
	I0718 21:22:18.092981    9654 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:22:18.110907    9654 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:22:18.110963    9654 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:22:18.110982    9654 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:22:18.111002    9654 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:22:18.111011    9654 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr: exit status 7 (76.161021ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:21.123159    9657 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:21.123428    9657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:21.123433    9657 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:21.123437    9657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:21.123612    9657 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:21.123789    9657 out.go:298] Setting JSON to false
	I0718 21:22:21.123810    9657 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:21.123856    9657 notify.go:220] Checking for updates...
	I0718 21:22:21.124096    9657 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:21.124109    9657 status.go:255] checking status of multinode-409000 ...
	I0718 21:22:21.124481    9657 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:22:21.141665    9657 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:22:21.141725    9657 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:22:21.141745    9657 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:22:21.141768    9657 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:22:21.141776    9657 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr: exit status 7 (80.175308ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:24.688376    9662 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:24.688597    9662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:24.688615    9662 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:24.688619    9662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:24.688818    9662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:24.689022    9662 out.go:298] Setting JSON to false
	I0718 21:22:24.689042    9662 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:24.689144    9662 notify.go:220] Checking for updates...
	I0718 21:22:24.689491    9662 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:24.689542    9662 status.go:255] checking status of multinode-409000 ...
	I0718 21:22:24.689992    9662 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:22:24.708457    9662 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:22:24.708526    9662 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:22:24.708546    9662 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:22:24.708565    9662 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:22:24.708572    9662 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr: exit status 7 (79.026588ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:29.703672    9665 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:29.703961    9665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:29.703967    9665 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:29.703971    9665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:29.704146    9665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:29.704315    9665 out.go:298] Setting JSON to false
	I0718 21:22:29.704338    9665 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:29.704375    9665 notify.go:220] Checking for updates...
	I0718 21:22:29.704615    9665 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:29.704630    9665 status.go:255] checking status of multinode-409000 ...
	I0718 21:22:29.704998    9665 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:22:29.722547    9665 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:22:29.722607    9665 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:22:29.722633    9665 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:22:29.722653    9665 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:22:29.722671    9665 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr: exit status 7 (78.176014ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:35.852575    9672 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:35.852769    9672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:35.852774    9672 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:35.852777    9672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:35.852951    9672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:35.853139    9672 out.go:298] Setting JSON to false
	I0718 21:22:35.853164    9672 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:35.853213    9672 notify.go:220] Checking for updates...
	I0718 21:22:35.853459    9672 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:35.853476    9672 status.go:255] checking status of multinode-409000 ...
	I0718 21:22:35.853878    9672 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:22:35.871841    9672 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:22:35.871919    9672 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:22:35.871938    9672 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:22:35.871960    9672 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:22:35.871968    9672 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr: exit status 7 (76.905421ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:22:48.921526    9687 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:22:48.921700    9687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:48.921705    9687 out.go:304] Setting ErrFile to fd 2...
	I0718 21:22:48.921709    9687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:22:48.921877    9687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:22:48.922046    9687 out.go:298] Setting JSON to false
	I0718 21:22:48.922067    9687 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:22:48.922111    9687 notify.go:220] Checking for updates...
	I0718 21:22:48.922336    9687 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:22:48.922350    9687 status.go:255] checking status of multinode-409000 ...
	I0718 21:22:48.922758    9687 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:22:48.940315    9687 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:22:48.940378    9687 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:22:48.940403    9687 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:22:48.940427    9687 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:22:48.940435    9687 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr: exit status 7 (77.305929ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:23:02.819594    9696 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:23:02.819858    9696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:23:02.819868    9696 out.go:304] Setting ErrFile to fd 2...
	I0718 21:23:02.819873    9696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:23:02.820047    9696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:23:02.820221    9696 out.go:298] Setting JSON to false
	I0718 21:23:02.820243    9696 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:23:02.820285    9696 notify.go:220] Checking for updates...
	I0718 21:23:02.821538    9696 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:23:02.821556    9696 status.go:255] checking status of multinode-409000 ...
	I0718 21:23:02.821933    9696 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:02.839405    9696 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:23:02.839470    9696 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:23:02.839491    9696 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:23:02.839516    9696 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:23:02.839523    9696 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-409000 status -v=7 --alsologtostderr" : exit status 7
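Before giving up, the subtest re-runs status at lengthening intervals (21:22:15, :16, :18, :21, :24, :29, :35, :48, then 21:23:02), i.e. a poll with growing backoff. A hedged Go sketch of that pattern; the helper name and exact delays are assumptions, not minikube's test code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForStatus polls `minikube status` until it exits 0 or the
	// attempts run out, lengthening the gap between polls each round.
	// The delays are modeled on the timestamps above, not copied from
	// minikube's test utilities.
	func waitForStatus(profile string, attempts int) error {
		delay := time.Second
		for i := 0; i < attempts; i++ {
			if err := exec.Command("out/minikube-darwin-amd64", "-p", profile, "status").Run(); err == nil {
				return nil // status exited 0: host is running
			}
			time.Sleep(delay)
			delay += delay / 2 // grow the interval between polls
		}
		return fmt.Errorf("%q: status never reported a running host after %d attempts", profile, attempts)
	}

	func main() {
		fmt.Println(waitForStatus("multinode-409000", 9))
	}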
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "83ec88f1a5b059b6c0cdc24a83ac4d67116954f7eb533ea93561cd307a70d602",
	        "Created": "2024-07-19T04:14:40.679070844Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (73.135336ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:23:02.933809    9700 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (47.42s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (790.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-409000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-409000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-409000: exit status 82 (13.513376851s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-409000"  ...
	* Stopping node "multinode-409000"  ...
	* Stopping node "multinode-409000"  ...
	* Stopping node "multinode-409000"  ...
	* Stopping node "multinode-409000"  ...
	* Stopping node "multinode-409000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-409000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-409000" : exit status 82
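Before rerunning the stop, it can help to confirm by hand that the container really is gone, which is what the GUEST_STOP_TIMEOUT above reduces to. The docker invocation below is standard CLI (the name filter and format string are illustrative choices), and the minikube flags mirror the test's:

	docker ps -a --filter name=multinode-409000 --format '{{.Names}} {{.Status}}'
	out/minikube-darwin-amd64 stop -p multinode-409000 --alsologtostderr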
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-409000 --wait=true -v=8 --alsologtostderr
E0718 21:24:15.642700    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 21:24:32.588366    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 21:25:44.210776    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:29:32.580918    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 21:30:27.254195    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:30:44.204358    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:34:32.674179    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 21:35:44.299011    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-409000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m56.677871445s)

-- stdout --
	* [multinode-409000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-409000" primary control-plane node in "multinode-409000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* docker "multinode-409000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-409000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0718 21:23:16.560775    9720 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:23:16.561382    9720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:23:16.561391    9720 out.go:304] Setting ErrFile to fd 2...
	I0718 21:23:16.561398    9720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:23:16.562228    9720 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:23:16.563762    9720 out.go:298] Setting JSON to false
	I0718 21:23:16.587117    9720 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4969,"bootTime":1721358027,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 21:23:16.587223    9720 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:23:16.609230    9720 out.go:177] * [multinode-409000] minikube v1.33.1 on Darwin 14.5
	I0718 21:23:16.650963    9720 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:23:16.651047    9720 notify.go:220] Checking for updates...
	I0718 21:23:16.693986    9720 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 21:23:16.715925    9720 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 21:23:16.736880    9720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:23:16.758008    9720 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	I0718 21:23:16.778943    9720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:23:16.800752    9720 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:23:16.800927    9720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:23:16.826300    9720 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 21:23:16.826468    9720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 21:23:16.908302    9720 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:98 SystemTime:2024-07-19 04:23:16.899106241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 21:23:16.950828    9720 out.go:177] * Using the docker driver based on existing profile
	I0718 21:23:16.971966    9720 start.go:297] selected driver: docker
	I0718 21:23:16.972009    9720 start.go:901] validating driver "docker" against &{Name:multinode-409000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-409000 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:23:16.972122    9720 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:23:16.972330    9720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 21:23:17.054509    9720 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:98 SystemTime:2024-07-19 04:23:17.046029548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 21:23:17.057659    9720 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:23:17.057725    9720 cni.go:84] Creating CNI manager for ""
	I0718 21:23:17.057735    9720 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 21:23:17.057817    9720 start.go:340] cluster config:
	{Name:multinode-409000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:23:17.099754    9720 out.go:177] * Starting "multinode-409000" primary control-plane node in "multinode-409000" cluster
	I0718 21:23:17.121012    9720 cache.go:121] Beginning downloading kic base image for docker with docker
	I0718 21:23:17.142987    9720 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0718 21:23:17.184963    9720 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:23:17.185041    9720 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 21:23:17.185029    9720 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0718 21:23:17.185060    9720 cache.go:56] Caching tarball of preloaded images
	I0718 21:23:17.185269    9720 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 21:23:17.185287    9720 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:23:17.186121    9720 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/multinode-409000/config.json ...
	W0718 21:23:17.210693    9720 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0718 21:23:17.210706    9720 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0718 21:23:17.210842    9720 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0718 21:23:17.210869    9720 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0718 21:23:17.210878    9720 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0718 21:23:17.210888    9720 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0718 21:23:17.210894    9720 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0718 21:23:17.214133    9720 image.go:273] response: 
	I0718 21:23:17.353008    9720 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0718 21:23:17.353060    9720 cache.go:194] Successfully downloaded all kic artifacts
	I0718 21:23:17.353106    9720 start.go:360] acquireMachinesLock for multinode-409000: {Name:mkbdc3ca6460cbeb89ccd0dcec6987ecea99db54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:23:17.353208    9720 start.go:364] duration metric: took 84.284µs to acquireMachinesLock for "multinode-409000"
	I0718 21:23:17.353230    9720 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:23:17.353241    9720 fix.go:54] fixHost starting: 
	I0718 21:23:17.353483    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:17.370729    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:23:17.370803    9720 fix.go:112] recreateIfNeeded on multinode-409000: state= err=unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:17.370824    9720 fix.go:117] machineExists: false. err=machine does not exist
	I0718 21:23:17.412604    9720 out.go:177] * docker "multinode-409000" container is missing, will recreate.
	I0718 21:23:17.433788    9720 delete.go:124] DEMOLISHING multinode-409000 ...
	I0718 21:23:17.433891    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:17.450909    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	W0718 21:23:17.450965    9720 stop.go:83] unable to get state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:17.450989    9720 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:17.451391    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:17.468571    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:23:17.468620    9720 delete.go:82] Unable to get host status for multinode-409000, assuming it has already been deleted: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:17.468705    9720 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-409000
	W0718 21:23:17.486634    9720 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-409000 returned with exit code 1
	I0718 21:23:17.486664    9720 kic.go:371] could not find the container multinode-409000 to remove it. will try anyways
	I0718 21:23:17.486748    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:17.503699    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	W0718 21:23:17.503747    9720 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:17.503827    9720 cli_runner.go:164] Run: docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0"
	W0718 21:23:17.520413    9720 cli_runner.go:211] docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0718 21:23:17.520442    9720 oci.go:650] error shutdown multinode-409000: docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:18.520821    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:18.537895    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:23:18.537940    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:18.537951    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:23:18.537988    9720 retry.go:31] will retry after 250.062849ms: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:18.788964    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:18.806102    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:23:18.806156    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:18.806165    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:23:18.806190    9720 retry.go:31] will retry after 961.228702ms: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:19.767576    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:19.785012    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:23:19.785054    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:19.785062    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:23:19.785089    9720 retry.go:31] will retry after 698.123164ms: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:20.484041    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:20.500977    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:23:20.501022    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:20.501030    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:23:20.501054    9720 retry.go:31] will retry after 1.249946741s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:21.753154    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:21.771424    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:23:21.771467    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:21.771476    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:23:21.771503    9720 retry.go:31] will retry after 2.805058569s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:24.577118    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:24.594195    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:23:24.594239    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:24.594248    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:23:24.594281    9720 retry.go:31] will retry after 3.405617817s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:28.001104    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:28.020940    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:23:28.020986    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:28.021001    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:23:28.021027    9720 retry.go:31] will retry after 8.323821595s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:36.345261    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:23:36.365091    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:23:36.365149    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:23:36.365160    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:23:36.365189    9720 oci.go:88] couldn't shut down multinode-409000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	 
	I0718 21:23:36.365260    9720 cli_runner.go:164] Run: docker rm -f -v multinode-409000
	I0718 21:23:36.383134    9720 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-409000
	W0718 21:23:36.401115    9720 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-409000 returned with exit code 1
	I0718 21:23:36.401273    9720 cli_runner.go:164] Run: docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 21:23:36.418603    9720 cli_runner.go:164] Run: docker network rm multinode-409000
	I0718 21:23:36.501863    9720 fix.go:124] Sleeping 1 second for extra luck!
	I0718 21:23:37.502078    9720 start.go:125] createHost starting for "" (driver="docker")
	I0718 21:23:37.524033    9720 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 21:23:37.524228    9720 start.go:159] libmachine.API.Create for "multinode-409000" (driver="docker")
	I0718 21:23:37.524277    9720 client.go:168] LocalClient.Create starting
	I0718 21:23:37.524488    9720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 21:23:37.524590    9720 main.go:141] libmachine: Decoding PEM data...
	I0718 21:23:37.524625    9720 main.go:141] libmachine: Parsing certificate...
	I0718 21:23:37.524719    9720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 21:23:37.524803    9720 main.go:141] libmachine: Decoding PEM data...
	I0718 21:23:37.524819    9720 main.go:141] libmachine: Parsing certificate...
	I0718 21:23:37.525499    9720 cli_runner.go:164] Run: docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 21:23:37.544112    9720 cli_runner.go:211] docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 21:23:37.544216    9720 network_create.go:284] running [docker network inspect multinode-409000] to gather additional debugging logs...
	I0718 21:23:37.544233    9720 cli_runner.go:164] Run: docker network inspect multinode-409000
	W0718 21:23:37.562403    9720 cli_runner.go:211] docker network inspect multinode-409000 returned with exit code 1
	I0718 21:23:37.562432    9720 network_create.go:287] error running [docker network inspect multinode-409000]: docker network inspect multinode-409000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-409000 not found
	I0718 21:23:37.562444    9720 network_create.go:289] output of [docker network inspect multinode-409000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-409000 not found
	
	** /stderr **
	I0718 21:23:37.562583    9720 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 21:23:37.581379    9720 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:23:37.583123    9720 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:23:37.583624    9720 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001525700}
	I0718 21:23:37.583654    9720 network_create.go:124] attempt to create docker network multinode-409000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0718 21:23:37.583748    9720 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-409000 multinode-409000
	I0718 21:23:37.648811    9720 network_create.go:108] docker network multinode-409000 192.168.67.0/24 created
	I0718 21:23:37.648852    9720 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-409000" container
	I0718 21:23:37.648981    9720 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 21:23:37.666937    9720 cli_runner.go:164] Run: docker volume create multinode-409000 --label name.minikube.sigs.k8s.io=multinode-409000 --label created_by.minikube.sigs.k8s.io=true
	I0718 21:23:37.683874    9720 oci.go:103] Successfully created a docker volume multinode-409000
	I0718 21:23:37.683981    9720 cli_runner.go:164] Run: docker run --rm --name multinode-409000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-409000 --entrypoint /usr/bin/test -v multinode-409000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 21:23:37.934079    9720 oci.go:107] Successfully prepared a docker volume multinode-409000
	I0718 21:23:37.934121    9720 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:23:37.934134    9720 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 21:23:37.934237    9720 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-409000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 21:29:37.516758    9720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 21:29:37.516895    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:37.537124    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:37.537226    9720 retry.go:31] will retry after 297.627504ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:37.835443    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:37.854810    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:37.854926    9720 retry.go:31] will retry after 352.163341ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:38.209524    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:38.229271    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:38.229366    9720 retry.go:31] will retry after 330.409952ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:38.560212    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:38.579386    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:38.579498    9720 retry.go:31] will retry after 869.101077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:39.449966    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:39.472634    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:29:39.472744    9720 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:29:39.472761    9720 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:39.472827    9720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 21:29:39.472879    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:39.489990    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:39.490097    9720 retry.go:31] will retry after 370.848972ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:39.861778    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:39.881961    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:39.882058    9720 retry.go:31] will retry after 392.896447ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:40.276672    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:40.295279    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:40.295381    9720 retry.go:31] will retry after 614.322977ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:40.912100    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:40.931679    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:29:40.931782    9720 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:29:40.931797    9720 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:40.931811    9720 start.go:128] duration metric: took 6m3.439716899s to createHost
	I0718 21:29:40.931885    9720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 21:29:40.931942    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:40.949223    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:40.949313    9720 retry.go:31] will retry after 252.331165ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:41.204001    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:41.223567    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:41.223675    9720 retry.go:31] will retry after 555.997502ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:41.779956    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:41.799222    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:41.799312    9720 retry.go:31] will retry after 735.556686ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:42.537254    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:42.557360    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:29:42.557475    9720 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:29:42.557497    9720 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:42.557556    9720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 21:29:42.557614    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:42.575254    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:42.575351    9720 retry.go:31] will retry after 295.506206ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:42.873245    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:42.893290    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:42.893381    9720 retry.go:31] will retry after 261.98018ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:43.156426    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:43.176452    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:29:43.176548    9720 retry.go:31] will retry after 560.289644ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:43.737659    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:29:43.755409    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:29:43.755517    9720 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:29:43.755531    9720 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:43.755542    9720 fix.go:56] duration metric: took 6m26.412941615s for fixHost
	I0718 21:29:43.755548    9720 start.go:83] releasing machines lock for "multinode-409000", held for 6m26.412969374s
	W0718 21:29:43.755563    9720 start.go:714] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0718 21:29:43.755628    9720 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0718 21:29:43.755634    9720 start.go:729] Will try again in 5 seconds ...
	I0718 21:29:48.757659    9720 start.go:360] acquireMachinesLock for multinode-409000: {Name:mkbdc3ca6460cbeb89ccd0dcec6987ecea99db54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:29:48.757957    9720 start.go:364] duration metric: took 252.937µs to acquireMachinesLock for "multinode-409000"
	I0718 21:29:48.757998    9720 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:29:48.758006    9720 fix.go:54] fixHost starting: 
	I0718 21:29:48.758454    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:29:48.778081    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:29:48.778137    9720 fix.go:112] recreateIfNeeded on multinode-409000: state= err=unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:48.778146    9720 fix.go:117] machineExists: false. err=machine does not exist
	I0718 21:29:48.799890    9720 out.go:177] * docker "multinode-409000" container is missing, will recreate.
	I0718 21:29:48.842593    9720 delete.go:124] DEMOLISHING multinode-409000 ...
	I0718 21:29:48.842829    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:29:48.860735    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	W0718 21:29:48.860778    9720 stop.go:83] unable to get state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:48.860799    9720 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:48.861170    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:29:48.878267    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:29:48.878314    9720 delete.go:82] Unable to get host status for multinode-409000, assuming it has already been deleted: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:48.878397    9720 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-409000
	W0718 21:29:48.895427    9720 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-409000 returned with exit code 1
	I0718 21:29:48.895457    9720 kic.go:371] could not find the container multinode-409000 to remove it. will try anyways
	I0718 21:29:48.895540    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:29:48.912671    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	W0718 21:29:48.912707    9720 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:48.912790    9720 cli_runner.go:164] Run: docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0"
	W0718 21:29:48.929888    9720 cli_runner.go:211] docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0718 21:29:48.929923    9720 oci.go:650] error shutdown multinode-409000: docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:49.932225    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:29:49.952277    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:29:49.952320    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:49.952334    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:29:49.952355    9720 retry.go:31] will retry after 725.367703ms: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:50.680088    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:29:50.699163    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:29:50.699205    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:50.699215    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:29:50.699243    9720 retry.go:31] will retry after 406.081688ms: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:51.107865    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:29:51.128722    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:29:51.128764    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:51.128773    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:29:51.128795    9720 retry.go:31] will retry after 653.835393ms: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:51.783708    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:29:51.803483    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:29:51.803526    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:51.803535    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:29:51.803561    9720 retry.go:31] will retry after 2.332388773s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:54.138286    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:29:54.158563    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:29:54.158605    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:54.158618    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:29:54.158643    9720 retry.go:31] will retry after 3.033820142s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:57.193200    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:29:57.212603    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:29:57.212647    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:29:57.212658    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:29:57.212677    9720 retry.go:31] will retry after 5.337189449s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:30:02.550334    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:30:02.568040    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:30:02.568082    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:30:02.568091    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:30:02.568114    9720 retry.go:31] will retry after 2.896107885s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:30:05.466575    9720 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:30:05.486994    9720 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:30:05.487039    9720 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:30:05.487048    9720 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:30:05.487097    9720 oci.go:88] couldn't shut down multinode-409000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	 
	I0718 21:30:05.487173    9720 cli_runner.go:164] Run: docker rm -f -v multinode-409000
	I0718 21:30:05.504869    9720 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-409000
	W0718 21:30:05.522068    9720 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-409000 returned with exit code 1
	I0718 21:30:05.522176    9720 cli_runner.go:164] Run: docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 21:30:05.539780    9720 cli_runner.go:164] Run: docker network rm multinode-409000
	I0718 21:30:05.617989    9720 fix.go:124] Sleeping 1 second for extra luck!
	I0718 21:30:06.619801    9720 start.go:125] createHost starting for "" (driver="docker")
	I0718 21:30:06.641375    9720 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 21:30:06.641543    9720 start.go:159] libmachine.API.Create for "multinode-409000" (driver="docker")
	I0718 21:30:06.641569    9720 client.go:168] LocalClient.Create starting
	I0718 21:30:06.641786    9720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 21:30:06.641897    9720 main.go:141] libmachine: Decoding PEM data...
	I0718 21:30:06.641924    9720 main.go:141] libmachine: Parsing certificate...
	I0718 21:30:06.642007    9720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 21:30:06.642095    9720 main.go:141] libmachine: Decoding PEM data...
	I0718 21:30:06.642110    9720 main.go:141] libmachine: Parsing certificate...
	I0718 21:30:06.664040    9720 cli_runner.go:164] Run: docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 21:30:06.684226    9720 cli_runner.go:211] docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 21:30:06.684319    9720 network_create.go:284] running [docker network inspect multinode-409000] to gather additional debugging logs...
	I0718 21:30:06.684336    9720 cli_runner.go:164] Run: docker network inspect multinode-409000
	W0718 21:30:06.701460    9720 cli_runner.go:211] docker network inspect multinode-409000 returned with exit code 1
	I0718 21:30:06.701485    9720 network_create.go:287] error running [docker network inspect multinode-409000]: docker network inspect multinode-409000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-409000 not found
	I0718 21:30:06.701496    9720 network_create.go:289] output of [docker network inspect multinode-409000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-409000 not found
	
	** /stderr **
	I0718 21:30:06.701644    9720 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 21:30:06.720801    9720 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:30:06.722427    9720 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:30:06.724136    9720 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:30:06.724613    9720 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013a1ba0}
	I0718 21:30:06.724631    9720 network_create.go:124] attempt to create docker network multinode-409000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0718 21:30:06.724739    9720 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-409000 multinode-409000
	W0718 21:30:06.742645    9720 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-409000 multinode-409000 returned with exit code 1
	W0718 21:30:06.742684    9720 network_create.go:149] failed to create docker network multinode-409000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-409000 multinode-409000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0718 21:30:06.742706    9720 network_create.go:116] failed to create docker network multinode-409000 192.168.76.0/24, will retry: subnet is taken
	I0718 21:30:06.744151    9720 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:30:06.744660    9720 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00070d670}
	I0718 21:30:06.744677    9720 network_create.go:124] attempt to create docker network multinode-409000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0718 21:30:06.744812    9720 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-409000 multinode-409000
	I0718 21:30:06.808373    9720 network_create.go:108] docker network multinode-409000 192.168.85.0/24 created
	I0718 21:30:06.808405    9720 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-409000" container
	I0718 21:30:06.808528    9720 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 21:30:06.826437    9720 cli_runner.go:164] Run: docker volume create multinode-409000 --label name.minikube.sigs.k8s.io=multinode-409000 --label created_by.minikube.sigs.k8s.io=true
	I0718 21:30:06.843748    9720 oci.go:103] Successfully created a docker volume multinode-409000
	I0718 21:30:06.843873    9720 cli_runner.go:164] Run: docker run --rm --name multinode-409000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-409000 --entrypoint /usr/bin/test -v multinode-409000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 21:30:07.082907    9720 oci.go:107] Successfully prepared a docker volume multinode-409000
	I0718 21:30:07.082943    9720 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:30:07.082955    9720 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 21:30:07.083059    9720 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-409000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0718 21:36:06.736472    9720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 21:36:06.736601    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:06.757328    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:06.757443    9720 retry.go:31] will retry after 324.037753ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:07.083898    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:07.103774    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:07.103880    9720 retry.go:31] will retry after 207.71744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:07.313908    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:07.334266    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:07.334379    9720 retry.go:31] will retry after 428.842495ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:07.764800    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:07.784609    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:07.784725    9720 retry.go:31] will retry after 705.1381ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:08.491104    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:08.510859    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:36:08.510977    9720 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:36:08.510999    9720 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:08.511060    9720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 21:36:08.511116    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:08.528490    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:08.528585    9720 retry.go:31] will retry after 318.508407ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:08.849485    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:08.869526    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:08.869624    9720 retry.go:31] will retry after 382.09872ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:09.253889    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:09.274086    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:09.274192    9720 retry.go:31] will retry after 329.423823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:09.604468    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:09.624018    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:36:09.624123    9720 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:36:09.624139    9720 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:09.624149    9720 start.go:128] duration metric: took 6m2.911105051s to createHost
	I0718 21:36:09.624219    9720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 21:36:09.624279    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:09.641206    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:09.641298    9720 retry.go:31] will retry after 364.952797ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:10.008423    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:10.027202    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:10.027299    9720 retry.go:31] will retry after 561.204954ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:10.590857    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:10.610764    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:10.610858    9720 retry.go:31] will retry after 827.280778ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:11.440531    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:11.460732    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:36:11.460835    9720 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:36:11.460853    9720 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:11.460921    9720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0718 21:36:11.460975    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:11.479771    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:11.479865    9720 retry.go:31] will retry after 246.933433ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:11.728071    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:11.747491    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:11.747603    9720 retry.go:31] will retry after 472.713907ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:12.222710    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:12.242929    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:12.243024    9720 retry.go:31] will retry after 388.323339ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:12.633783    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:12.654406    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	I0718 21:36:12.654499    9720 retry.go:31] will retry after 438.632588ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:13.094698    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000
	W0718 21:36:13.114809    9720 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000 returned with exit code 1
	W0718 21:36:13.114908    9720 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	W0718 21:36:13.114925    9720 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-409000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:13.114936    9720 fix.go:56] duration metric: took 6m24.26429128s for fixHost
	I0718 21:36:13.114942    9720 start.go:83] releasing machines lock for "multinode-409000", held for 6m24.264331623s
	W0718 21:36:13.115020    9720 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-409000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-409000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0718 21:36:13.158476    9720 out.go:177] 
	W0718 21:36:13.179504    9720 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0718 21:36:13.179556    9720 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0718 21:36:13.179584    9720 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0718 21:36:13.200430    9720 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-409000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-409000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "152b6ae21d0b1ba8e39b1b308abfadf3d3710d6aeed3eab3c99dc241402a259e",
	        "Created": "2024-07-19T04:30:06.760873929Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (73.508028ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:36:13.428499   10477 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (790.41s)
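
The "retry.go:31] will retry after ..." lines that dominate this failure trace come from a generic backoff-and-retry loop wrapped around the "docker container inspect" probe: each attempt fails with "No such container", the wait grows, and the whole thing is eventually cut off by the 360-second create-host timeout. Below is a minimal, self-contained sketch of that pattern in Go. The helper name retryWithBackoff, the wait constants, and the short demo deadline are illustrative assumptions for this report, not minikube's actual retry implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryWithBackoff re-runs probe until it succeeds or deadline elapses,
// roughly doubling the wait between attempts (capped at maxWait). This is
// a sketch of the pattern behind the "will retry after ..." log lines.
func retryWithBackoff(deadline time.Duration, probe func() error) error {
	start := time.Now()
	wait := 300 * time.Millisecond
	const maxWait = 5 * time.Second
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("gave up after %s: %w", deadline, err)
		}
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
}

func main() {
	// The probe mirrors the failing command in the log: ask Docker for the
	// host port mapped to the container's sshd. While the container does
	// not exist, docker exits non-zero and the loop keeps retrying.
	probe := func() error {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"multinode-409000").CombinedOutput()
		if err != nil {
			return fmt.Errorf("inspect: %v: %s", err, out)
		}
		fmt.Printf("ssh port: %s", out)
		return nil
	}
	// A 10-second deadline keeps the demo short; the run above used minutes.
	if err := retryWithBackoff(10*time.Second, probe); err != nil {
		fmt.Println(err)
	}
}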

                                                
                                    
TestMultiNode/serial/DeleteNode (0.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 node delete m03: exit status 80 (159.497348ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-409000 host status: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-409000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr: exit status 7 (73.903661ms)

                                                
                                                
-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0718 21:36:13.643107   10483 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:36:13.643365   10483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:36:13.643370   10483 out.go:304] Setting ErrFile to fd 2...
	I0718 21:36:13.643374   10483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:36:13.643554   10483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:36:13.643730   10483 out.go:298] Setting JSON to false
	I0718 21:36:13.643752   10483 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:36:13.643793   10483 notify.go:220] Checking for updates...
	I0718 21:36:13.644047   10483 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:36:13.644065   10483 status.go:255] checking status of multinode-409000 ...
	I0718 21:36:13.644516   10483 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:13.662070   10483 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:13.662126   10483 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:36:13.662145   10483 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:36:13.662169   10483 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:36:13.662177   10483 status.go:263] The "multinode-409000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "152b6ae21d0b1ba8e39b1b308abfadf3d3710d6aeed3eab3c99dc241402a259e",
	        "Created": "2024-07-19T04:30:06.760873929Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (73.584655ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0718 21:36:13.757018   10487 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.33s)
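
Each "Nonexistent" status above is produced by probing the container state with "docker container inspect --format={{.State.Status}}" and mapping the daemon's "No such container" error to a missing host instead of treating it as a fatal failure. Here is a minimal sketch of that mapping, assuming a hypothetical hostState helper; the function name and return values are illustrative, not minikube's status.go API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState returns the container's Docker state ("running", "exited", ...),
// or "Nonexistent" when the daemon reports that the container is gone, which
// is exactly the situation in the status output above.
func hostState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			// Deleted out from under us: report it, but not as an error.
			return "Nonexistent", nil
		}
		return "", fmt.Errorf("inspect %s: %v: %s", name, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := hostState("multinode-409000")
	if err != nil {
		fmt.Println("status error:", err)
		return
	}
	fmt.Println("host:", state)
}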

                                                
                                    
TestMultiNode/serial/StopMultiNode (12.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 stop: exit status 82 (12.463027756s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-409000"  ...
	* Stopping node "multinode-409000"  ...
	* Stopping node "multinode-409000"  ...
	* Stopping node "multinode-409000"  ...
	* Stopping node "multinode-409000"  ...
	* Stopping node "multinode-409000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-409000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-409000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status: exit status 7 (73.616736ms)

-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0718 21:36:26.293766   10500 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:36:26.293779   10500 status.go:263] The "multinode-409000" host does not exist!

** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr: exit status 7 (73.330354ms)

-- stdout --
	multinode-409000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0718 21:36:26.348587   10503 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:36:26.348856   10503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:36:26.348862   10503 out.go:304] Setting ErrFile to fd 2...
	I0718 21:36:26.348866   10503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:36:26.349037   10503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:36:26.349217   10503 out.go:298] Setting JSON to false
	I0718 21:36:26.349239   10503 mustload.go:65] Loading cluster: multinode-409000
	I0718 21:36:26.349283   10503 notify.go:220] Checking for updates...
	I0718 21:36:26.349511   10503 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:36:26.349528   10503 status.go:255] checking status of multinode-409000 ...
	I0718 21:36:26.349921   10503 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:26.367143   10503 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:26.367198   10503 status.go:330] multinode-409000 host status = "" (err=state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	)
	I0718 21:36:26.367219   10503 status.go:257] multinode-409000 status: &{Name:multinode-409000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0718 21:36:26.367241   10503 status.go:260] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	E0718 21:36:26.367247   10503 status.go:263] The "multinode-409000" host does not exist!

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr": multinode-409000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-409000 status --alsologtostderr": multinode-409000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "152b6ae21d0b1ba8e39b1b308abfadf3d3710d6aeed3eab3c99dc241402a259e",
	        "Created": "2024-07-19T04:30:06.760873929Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (74.270433ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0718 21:36:26.462679   10507 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (12.71s)
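
The six repeated "Stopping node" lines followed by GUEST_STOP_TIMEOUT (exit status 82) imply a bounded stop-and-verify loop that expects the container to reach the "exited" state, which can never happen once the container is gone. A hedged sketch of that shape; the attempt count, sleep interval, and helper name are assumptions, not minikube's code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
		"time"
	)

	// containerState runs the same inspect call seen in the log; for a deleted
	// container it returns an error and an empty state, matching the log's
	// `container multinode-409000 status is  but expect it to be exited`.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const maxAttempts = 6 // matches the six "Stopping node" lines above
		for i := 0; i < maxAttempts; i++ {
			fmt.Println(`* Stopping node "multinode-409000"  ...`)
			if state, err := containerState("multinode-409000"); err == nil && state == "exited" {
				return // cleanly stopped
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Fprintln(os.Stderr, "X Exiting due to GUEST_STOP_TIMEOUT")
		os.Exit(82) // the exit status the test asserts against
	}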

TestMultiNode/serial/RestartMultiNode (104.49s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-409000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-409000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m44.392382048s)

-- stdout --
	* [multinode-409000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-409000" primary control-plane node in "multinode-409000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* docker "multinode-409000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

-- /stdout --
** stderr ** 
	I0718 21:36:26.517234   10510 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:36:26.517494   10510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:36:26.517500   10510 out.go:304] Setting ErrFile to fd 2...
	I0718 21:36:26.517503   10510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:36:26.517686   10510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 21:36:26.519127   10510 out.go:298] Setting JSON to false
	I0718 21:36:26.541595   10510 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5759,"bootTime":1721358027,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 21:36:26.541693   10510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:36:26.563934   10510 out.go:177] * [multinode-409000] minikube v1.33.1 on Darwin 14.5
	I0718 21:36:26.606517   10510 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:36:26.606584   10510 notify.go:220] Checking for updates...
	I0718 21:36:26.649052   10510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 21:36:26.670410   10510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 21:36:26.691416   10510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:36:26.712099   10510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	I0718 21:36:26.733383   10510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:36:26.755140   10510 config.go:182] Loaded profile config "multinode-409000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:36:26.755929   10510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:36:26.780357   10510 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 21:36:26.780543   10510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 21:36:26.861191   10510 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:122 SystemTime:2024-07-19 04:36:26.85261062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 21:36:26.883058   10510 out.go:177] * Using the docker driver based on existing profile
	I0718 21:36:26.903900   10510 start.go:297] selected driver: docker
	I0718 21:36:26.903926   10510 start.go:901] validating driver "docker" against &{Name:multinode-409000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:36:26.904045   10510 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:36:26.904240   10510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 21:36:26.985571   10510 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:122 SystemTime:2024-07-19 04:36:26.976974571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 21:36:26.988596   10510 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:36:26.988633   10510 cni.go:84] Creating CNI manager for ""
	I0718 21:36:26.988642   10510 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0718 21:36:26.988723   10510 start.go:340] cluster config:
	{Name:multinode-409000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:36:27.031903   10510 out.go:177] * Starting "multinode-409000" primary control-plane node in "multinode-409000" cluster
	I0718 21:36:27.053045   10510 cache.go:121] Beginning downloading kic base image for docker with docker
	I0718 21:36:27.073816   10510 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0718 21:36:27.116027   10510 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:36:27.116068   10510 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0718 21:36:27.116106   10510 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 21:36:27.116128   10510 cache.go:56] Caching tarball of preloaded images
	I0718 21:36:27.116359   10510 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 21:36:27.116378   10510 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:36:27.117148   10510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/multinode-409000/config.json ...
	W0718 21:36:27.142530   10510 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0718 21:36:27.142554   10510 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0718 21:36:27.142683   10510 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0718 21:36:27.142703   10510 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0718 21:36:27.142709   10510 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0718 21:36:27.142720   10510 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0718 21:36:27.142725   10510 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0718 21:36:27.145642   10510 image.go:273] response: 
	I0718 21:36:27.294921   10510 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0718 21:36:27.294988   10510 cache.go:194] Successfully downloaded all kic artifacts
	I0718 21:36:27.295037   10510 start.go:360] acquireMachinesLock for multinode-409000: {Name:mkbdc3ca6460cbeb89ccd0dcec6987ecea99db54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:36:27.295136   10510 start.go:364] duration metric: took 79.771µs to acquireMachinesLock for "multinode-409000"
	I0718 21:36:27.295162   10510 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:36:27.295173   10510 fix.go:54] fixHost starting: 
	I0718 21:36:27.295403   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:27.312669   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:27.312741   10510 fix.go:112] recreateIfNeeded on multinode-409000: state= err=unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:27.312760   10510 fix.go:117] machineExists: false. err=machine does not exist
	I0718 21:36:27.355018   10510 out.go:177] * docker "multinode-409000" container is missing, will recreate.
	I0718 21:36:27.376020   10510 delete.go:124] DEMOLISHING multinode-409000 ...
	I0718 21:36:27.376124   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:27.393040   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	W0718 21:36:27.393088   10510 stop.go:83] unable to get state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:27.393102   10510 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:27.393494   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:27.410508   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:27.410564   10510 delete.go:82] Unable to get host status for multinode-409000, assuming it has already been deleted: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:27.410653   10510 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-409000
	W0718 21:36:27.427680   10510 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-409000 returned with exit code 1
	I0718 21:36:27.427712   10510 kic.go:371] could not find the container multinode-409000 to remove it. will try anyways
	I0718 21:36:27.427788   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:27.445670   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	W0718 21:36:27.445716   10510 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:27.445811   10510 cli_runner.go:164] Run: docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0"
	W0718 21:36:27.462799   10510 cli_runner.go:211] docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0718 21:36:27.462839   10510 oci.go:650] error shutdown multinode-409000: docker exec --privileged -t multinode-409000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:28.463375   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:28.480677   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:28.480728   10510 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:28.480739   10510 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:36:28.480773   10510 retry.go:31] will retry after 405.535806ms: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:28.886899   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:28.903870   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:28.903916   10510 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:28.903927   10510 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:36:28.903949   10510 retry.go:31] will retry after 419.518955ms: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:29.323766   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:29.341157   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:29.341215   10510 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:29.341225   10510 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:36:29.341249   10510 retry.go:31] will retry after 1.199238548s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:30.540695   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:30.557800   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:30.557846   10510 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:30.557855   10510 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:36:30.557877   10510 retry.go:31] will retry after 1.148003402s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:31.706052   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:31.723335   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:31.723378   10510 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:31.723387   10510 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:36:31.723410   10510 retry.go:31] will retry after 1.758547571s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:33.482170   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:33.499530   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:33.499582   10510 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:33.499593   10510 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:36:33.499615   10510 retry.go:31] will retry after 2.167209059s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:35.667033   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:35.685120   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:35.685169   10510 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:35.685179   10510 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:36:35.685198   10510 retry.go:31] will retry after 3.569170391s: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:39.254602   10510 cli_runner.go:164] Run: docker container inspect multinode-409000 --format={{.State.Status}}
	W0718 21:36:39.275008   10510 cli_runner.go:211] docker container inspect multinode-409000 --format={{.State.Status}} returned with exit code 1
	I0718 21:36:39.275053   10510 oci.go:662] temporary error verifying shutdown: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	I0718 21:36:39.275064   10510 oci.go:664] temporary error: container multinode-409000 status is  but expect it to be exited
	I0718 21:36:39.275095   10510 oci.go:88] couldn't shut down multinode-409000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000
	 
	I0718 21:36:39.275164   10510 cli_runner.go:164] Run: docker rm -f -v multinode-409000
	I0718 21:36:39.293063   10510 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-409000
	W0718 21:36:39.310038   10510 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-409000 returned with exit code 1
	I0718 21:36:39.310150   10510 cli_runner.go:164] Run: docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 21:36:39.327492   10510 cli_runner.go:164] Run: docker network rm multinode-409000
	I0718 21:36:39.403025   10510 fix.go:124] Sleeping 1 second for extra luck!
	I0718 21:36:40.405196   10510 start.go:125] createHost starting for "" (driver="docker")
	I0718 21:36:40.428421   10510 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0718 21:36:40.428596   10510 start.go:159] libmachine.API.Create for "multinode-409000" (driver="docker")
	I0718 21:36:40.428642   10510 client.go:168] LocalClient.Create starting
	I0718 21:36:40.428849   10510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/ca.pem
	I0718 21:36:40.428961   10510 main.go:141] libmachine: Decoding PEM data...
	I0718 21:36:40.428999   10510 main.go:141] libmachine: Parsing certificate...
	I0718 21:36:40.429093   10510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1453/.minikube/certs/cert.pem
	I0718 21:36:40.429172   10510 main.go:141] libmachine: Decoding PEM data...
	I0718 21:36:40.429187   10510 main.go:141] libmachine: Parsing certificate...
	I0718 21:36:40.430055   10510 cli_runner.go:164] Run: docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0718 21:36:40.448927   10510 cli_runner.go:211] docker network inspect multinode-409000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0718 21:36:40.449019   10510 network_create.go:284] running [docker network inspect multinode-409000] to gather additional debugging logs...
	I0718 21:36:40.449034   10510 cli_runner.go:164] Run: docker network inspect multinode-409000
	W0718 21:36:40.466896   10510 cli_runner.go:211] docker network inspect multinode-409000 returned with exit code 1
	I0718 21:36:40.466919   10510 network_create.go:287] error running [docker network inspect multinode-409000]: docker network inspect multinode-409000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-409000 not found
	I0718 21:36:40.466932   10510 network_create.go:289] output of [docker network inspect multinode-409000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-409000 not found
	
	** /stderr **
	I0718 21:36:40.467069   10510 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0718 21:36:40.485912   10510 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:36:40.487718   10510 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0718 21:36:40.488090   10510 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018cc620}
	I0718 21:36:40.488106   10510 network_create.go:124] attempt to create docker network multinode-409000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0718 21:36:40.488180   10510 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-409000 multinode-409000
	I0718 21:36:40.551894   10510 network_create.go:108] docker network multinode-409000 192.168.67.0/24 created
	I0718 21:36:40.551935   10510 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-409000" container
	I0718 21:36:40.552045   10510 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0718 21:36:40.570315   10510 cli_runner.go:164] Run: docker volume create multinode-409000 --label name.minikube.sigs.k8s.io=multinode-409000 --label created_by.minikube.sigs.k8s.io=true
	I0718 21:36:40.587494   10510 oci.go:103] Successfully created a docker volume multinode-409000
	I0718 21:36:40.587616   10510 cli_runner.go:164] Run: docker run --rm --name multinode-409000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-409000 --entrypoint /usr/bin/test -v multinode-409000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0718 21:36:40.844181   10510 oci.go:107] Successfully prepared a docker volume multinode-409000
	I0718 21:36:40.844224   10510 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:36:40.844236   10510 kic.go:194] Starting extracting preloaded images to volume ...
	I0718 21:36:40.844332   10510 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-409000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-409000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
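
The retry delays in the stderr above (405ms, 419ms, 1.2s, 1.1s, 1.8s, 2.2s, 3.6s) grow roughly geometrically with some scatter, suggesting jittered exponential backoff. A sketch of that pattern under that assumption; this is not minikube's actual retry.go, just the shape the numbers imply:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// backoff doubles a base delay every couple of attempts and adds random
	// jitter, reproducing the irregular but growing waits seen in the log.
	func backoff(attempt int) time.Duration {
		base := 400 * time.Millisecond << uint(attempt/2)
		jitter := time.Duration(rand.Int63n(int64(base / 2)))
		return base + jitter
	}

	func main() {
		for attempt := 0; attempt < 7; attempt++ {
			fmt.Printf("will retry after %v\n", backoff(attempt))
		}
	}
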
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-409000
helpers_test.go:235: (dbg) docker inspect multinode-409000:

-- stdout --
	[
	    {
	        "Name": "multinode-409000",
	        "Id": "c8129061c11068601ee5449431282c1dd1bd01c25be7aee9785bbbed1f8c815c",
	        "Created": "2024-07-19T04:36:40.504088798Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-409000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-409000 -n multinode-409000: exit status 7 (73.457873ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0718 21:38:10.952529   10626 status.go:249] status error: host: state: unknown state "multinode-409000": docker container inspect multinode-409000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-409000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-409000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (104.49s)
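
Before recreating the container, the log shows minikube walking candidate /24 subnets (192.168.49.0/24 and 192.168.58.0/24 are reserved, 192.168.67.0/24 is free) and then running the docker network create command at network_create.go:124. A sketch of that select-and-create flow; the candidate list, the helper name, and using create-failure as the overlap check are assumptions, though the flags mirror the logged command:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// createProfileNetwork tries candidate subnets in order; Docker rejects a
	// create whose subnet overlaps an existing network, which stands in for
	// minikube's "reserved subnet" check here.
	func createProfileNetwork(profile string) (string, error) {
		for _, subnet := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
			gateway := strings.TrimSuffix(subnet, "0/24") + "1" // e.g. 192.168.67.1
			err := exec.Command("docker", "network", "create",
				"--driver=bridge",
				"--subnet="+subnet, "--gateway="+gateway,
				"-o", "--ip-masq", "-o", "--icc",
				"-o", "com.docker.network.driver.mtu=65535",
				"--label=created_by.minikube.sigs.k8s.io=true",
				"--label=name.minikube.sigs.k8s.io="+profile,
				profile).Run()
			if err == nil {
				return subnet, nil
			}
		}
		return "", fmt.Errorf("no free subnet found for %s", profile)
	}

	func main() {
		subnet, err := createProfileNetwork("multinode-409000")
		fmt.Println(subnet, err)
	}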

TestScheduledStopUnix (300.55s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-677000 --memory=2048 --driver=docker 
E0718 21:40:44.291727    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:40:55.722371    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 21:44:32.663141    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-677000 --memory=2048 --driver=docker : signal: killed (5m0.004989306s)

-- stdout --
	* [scheduled-stop-677000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-677000" primary control-plane node in "scheduled-stop-677000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

-- stdout --
	* [scheduled-stop-677000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-677000" primary control-plane node in "scheduled-stop-677000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-18 21:44:43.208421 -0700 PDT m=+4784.874669455
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-677000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-677000:

-- stdout --
	[
	    {
	        "Name": "scheduled-stop-677000",
	        "Id": "c30039d412264c43394e483a30c8efbae9eeddc0545076105f1af717d88c9d07",
	        "Created": "2024-07-19T04:39:44.277738105Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-677000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-677000 -n scheduled-stop-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-677000 -n scheduled-stop-677000: exit status 7 (78.257024ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0718 21:44:43.308282   11129 status.go:249] status error: host: state: unknown state "scheduled-stop-677000": docker container inspect scheduled-stop-677000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-677000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-677000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-677000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-677000
--- FAIL: TestScheduledStopUnix (300.55s)
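Note on the failure mode above: the harness probes host state by shelling out to docker, and a missing container is mapped to the "Nonexistent" state (the exit status 7 path). A minimal Go sketch of that probe, reusing the exact docker command shown in the stderr block; this is an illustration only, not minikube's implementation, and the profile name is just the one from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState mimics the status probe above: ask dockerd for the
	// container's state, and treat any inspect failure (e.g. "No such
	// container: ...") as Nonexistent, which is what exit status 7 reports.
	func containerState(name string) string {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "Nonexistent"
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		fmt.Println(containerState("scheduled-stop-677000")) // profile from this run
	}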

TestSkaffold (300.54s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe3707274405 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe3707274405 version: (1.698352599s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-508000 --memory=2600 --driver=docker 
E0718 21:45:44.287295    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:47:07.338355    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:49:32.657947    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-508000 --memory=2600 --driver=docker : signal: killed (4m57.448096294s)

-- stdout --
	* [skaffold-508000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-508000" primary control-plane node in "skaffold-508000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

-- stdout --
	* [skaffold-508000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-508000" primary control-plane node in "skaffold-508000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-07-18 21:49:43.75275 -0700 PDT m=+5085.424442883
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-508000
helpers_test.go:235: (dbg) docker inspect skaffold-508000:

-- stdout --
	[
	    {
	        "Name": "skaffold-508000",
	        "Id": "6aa283dffa97964bdf279cb5557d2e175e14a4449c912de3dc8df4963342aa41",
	        "Created": "2024-07-19T04:44:47.301014009Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-508000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-508000 -n skaffold-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-508000 -n skaffold-508000: exit status 7 (73.975356ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0718 21:49:43.849468   11257 status.go:249] status error: host: state: unknown state "skaffold-508000": docker container inspect skaffold-508000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-508000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-508000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-508000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-508000
--- FAIL: TestSkaffold (300.54s)

TestInsufficientStorage (300.45s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-664000 --memory=2048 --output=json --wait=true --driver=docker 
E0718 21:50:44.280885    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 21:54:32.652405    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-664000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.004380844s)

-- stdout --
	{"specversion":"1.0","id":"0c907f51-af5c-4187-9ec5-ed81021c502f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-664000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1005b31a-5700-4ddc-bb44-a59ddb77944b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"e0f8817c-254d-42c0-ae94-fdbbf2a67c19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig"}}
	{"specversion":"1.0","id":"8ad59ddb-5c64-4fdf-93fb-9195d33b6b6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"01bfc1ab-a85f-454c-91bb-e8c51f1c543d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7abb8bc6-bfc6-4259-ac97-003ac5297713","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube"}}
	{"specversion":"1.0","id":"cdec1371-33c9-4d21-9da9-ef15bc686faa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"63df39e6-841c-42ad-86a0-9f4569d03fc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fad3e8c9-c206-43ba-a6b3-0b24dee432ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8ebd44b6-8f0c-4c14-a13e-3cdbd41401b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0af62f0b-4bc4-472d-86d8-d47f49f6bc30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"81ba0002-3c5a-4269-85c8-c33fd0acf97e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-664000\" primary control-plane node in \"insufficient-storage-664000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0de517fd-2d9e-49e6-9cbd-f15d9b710e35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721324606-19298 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac018f94-7dcd-44f3-8999-99da2c3ab934","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-664000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-664000 --output=json --layout=cluster: context deadline exceeded (817ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-664000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-664000
--- FAIL: TestInsufficientStorage (300.45s)
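Note: the --output=json run above emits one CloudEvents-style record per line; the status check that followed failed with "unexpected end of JSON input" because the killed process produced an empty buffer. A minimal Go sketch of decoding one such line; the struct mirrors only the fields visible in this log and is not minikube's own type:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	// event covers just the fields present in the JSON lines above.
	type event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// Copied verbatim from the stdout above.
		line := `{"specversion":"1.0","id":"1005b31a-5700-4ddc-bb44-a59ddb77944b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}`
		var ev event
		// Unmarshalling an empty buffer here would reproduce
		// "unexpected end of JSON input".
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			log.Fatal(err)
		}
		fmt.Println(ev.Type, ev.Data["message"])
	}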


Test pass (176/217)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.68
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.65
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.3/json-events 12.02
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.29
18 TestDownloadOnly/v1.30.3/DeleteAll 0.34
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 19.47
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.34
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.21
29 TestDownloadOnlyKic 1.57
30 TestBinaryMirror 1.32
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 224.54
40 TestAddons/parallel/InspektorGadget 10.81
41 TestAddons/parallel/MetricsServer 6.67
42 TestAddons/parallel/HelmTiller 9.66
44 TestAddons/parallel/CSI 68.11
45 TestAddons/parallel/Headlamp 14.36
46 TestAddons/parallel/CloudSpanner 5.52
47 TestAddons/parallel/LocalPath 50.94
48 TestAddons/parallel/NvidiaDevicePlugin 6.54
49 TestAddons/parallel/Yakd 5.01
50 TestAddons/parallel/Volcano 35.31
53 TestAddons/serial/GCPAuth/Namespaces 0.11
54 TestAddons/StoppedEnableDisable 11.48
62 TestHyperKitDriverInstallOrUpdate 6.76
65 TestErrorSpam/setup 20.76
66 TestErrorSpam/start 2.13
67 TestErrorSpam/status 0.78
68 TestErrorSpam/pause 1.4
69 TestErrorSpam/unpause 1.42
70 TestErrorSpam/stop 2.3
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 74.83
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 33.99
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.14
82 TestFunctional/serial/CacheCmd/cache/add_local 1.4
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.4
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
88 TestFunctional/serial/MinikubeKubectlCmd 1.14
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.48
90 TestFunctional/serial/ExtraConfig 42.02
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 3.01
93 TestFunctional/serial/LogsFileCmd 2.78
94 TestFunctional/serial/InvalidService 3.92
96 TestFunctional/parallel/ConfigCmd 0.5
97 TestFunctional/parallel/DashboardCmd 9.95
98 TestFunctional/parallel/DryRun 1.45
99 TestFunctional/parallel/InternationalLanguage 0.61
100 TestFunctional/parallel/StatusCmd 0.79
105 TestFunctional/parallel/AddonsCmd 0.27
106 TestFunctional/parallel/PersistentVolumeClaim 27.83
108 TestFunctional/parallel/SSHCmd 0.5
109 TestFunctional/parallel/CpCmd 1.64
110 TestFunctional/parallel/MySQL 28.62
111 TestFunctional/parallel/FileSync 0.26
112 TestFunctional/parallel/CertSync 1.56
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.25
120 TestFunctional/parallel/License 0.36
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.15
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
132 TestFunctional/parallel/ServiceCmd/DeployApp 8.13
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
134 TestFunctional/parallel/ProfileCmd/profile_list 0.36
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
136 TestFunctional/parallel/ServiceCmd/List 0.65
137 TestFunctional/parallel/MountCmd/any-port 6.58
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
139 TestFunctional/parallel/ServiceCmd/HTTPS 15
140 TestFunctional/parallel/MountCmd/specific-port 1.74
141 TestFunctional/parallel/MountCmd/VerifyCleanup 2.11
142 TestFunctional/parallel/ServiceCmd/Format 15
143 TestFunctional/parallel/Version/short 0.13
144 TestFunctional/parallel/Version/components 0.68
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.81
150 TestFunctional/parallel/ImageCommands/Setup 1.81
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.01
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
153 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.6
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
155 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
156 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
157 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
158 TestFunctional/parallel/ServiceCmd/URL 15
159 TestFunctional/parallel/DockerEnv/bash 0.95
160 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
161 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
162 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 102.82
170 TestMultiControlPlane/serial/DeployApp 6.9
171 TestMultiControlPlane/serial/PingHostFromPods 1.39
172 TestMultiControlPlane/serial/AddWorkerNode 20.1
173 TestMultiControlPlane/serial/NodeLabels 0.06
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.66
175 TestMultiControlPlane/serial/CopyFile 15.91
176 TestMultiControlPlane/serial/StopSecondaryNode 11.31
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.48
178 TestMultiControlPlane/serial/RestartSecondaryNode 36.82
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.64
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 285.64
181 TestMultiControlPlane/serial/DeleteSecondaryNode 10.4
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.5
183 TestMultiControlPlane/serial/StopCluster 32.34
184 TestMultiControlPlane/serial/RestartCluster 81.41
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.47
186 TestMultiControlPlane/serial/AddSecondaryNode 33.75
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.66
190 TestImageBuild/serial/Setup 22.24
191 TestImageBuild/serial/NormalBuild 1.53
192 TestImageBuild/serial/BuildWithBuildArg 0.8
193 TestImageBuild/serial/BuildWithDockerIgnore 0.62
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.64
198 TestJSONOutput/start/Command 38.5
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.46
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.55
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 5.66
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.69
223 TestKicCustomNetwork/create_custom_network 22.88
224 TestKicCustomNetwork/use_default_bridge_network 22.34
225 TestKicExistingNetwork 21.91
226 TestKicCustomSubnet 22.54
227 TestKicStaticIP 22.69
228 TestMainNoArgs 0.08
229 TestMinikubeProfile 47.16
232 TestMountStart/serial/StartWithMountFirst 7.08
233 TestMountStart/serial/VerifyMountFirst 0.25
234 TestMountStart/serial/StartWithMountSecond 7.68
235 TestMountStart/serial/VerifyMountSecond 0.25
236 TestMountStart/serial/DeleteFirst 1.65
237 TestMountStart/serial/VerifyMountPostDelete 0.29
238 TestMountStart/serial/Stop 1.41
239 TestMountStart/serial/RestartStopped 8.68
259 TestPreload 91.73
280 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.3
281 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.38
TestDownloadOnly/v1.20.0/json-events (11.68s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-483000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-483000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (11.682287226s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.68s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-483000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-483000: exit status 85 (289.881848ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-483000 | jenkins | v1.33.1 | 18 Jul 24 20:24 PDT |          |
	|         | -p download-only-483000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:24:58
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:24:58.321823    1995 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:24:58.322070    1995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:24:58.322076    1995 out.go:304] Setting ErrFile to fd 2...
	I0718 20:24:58.322079    1995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:24:58.322259    1995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	W0718 20:24:58.322407    1995 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19302-1453/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19302-1453/.minikube/config/config.json: no such file or directory
	I0718 20:24:58.324778    1995 out.go:298] Setting JSON to true
	I0718 20:24:58.349533    1995 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1471,"bootTime":1721358027,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 20:24:58.349627    1995 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:24:58.369791    1995 out.go:97] [download-only-483000] minikube v1.33.1 on Darwin 14.5
	I0718 20:24:58.370048    1995 notify.go:220] Checking for updates...
	W0718 20:24:58.370048    1995 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball: no such file or directory
	I0718 20:24:58.391966    1995 out.go:169] MINIKUBE_LOCATION=19302
	I0718 20:24:58.415977    1995 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 20:24:58.437700    1995 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 20:24:58.459045    1995 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:24:58.481015    1995 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	W0718 20:24:58.523942    1995 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0718 20:24:58.524392    1995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:24:58.550942    1995 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 20:24:58.551074    1995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 20:24:58.634032    1995 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-19 03:24:58.625689135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 20:24:58.655917    1995 out.go:97] Using the docker driver based on user configuration
	I0718 20:24:58.655998    1995 start.go:297] selected driver: docker
	I0718 20:24:58.656009    1995 start.go:901] validating driver "docker" against <nil>
	I0718 20:24:58.656232    1995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 20:24:58.737781    1995 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-19 03:24:58.729694468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 20:24:58.737980    1995 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:24:58.742146    1995 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0718 20:24:58.742675    1995 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 20:24:58.763767    1995 out.go:169] Using Docker Desktop driver with root privileges
	I0718 20:24:58.784833    1995 cni.go:84] Creating CNI manager for ""
	I0718 20:24:58.784878    1995 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0718 20:24:58.784991    1995 start.go:340] cluster config:
	{Name:download-only-483000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-483000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:24:58.805879    1995 out.go:97] Starting "download-only-483000" primary control-plane node in "download-only-483000" cluster
	I0718 20:24:58.805973    1995 cache.go:121] Beginning downloading kic base image for docker with docker
	I0718 20:24:58.827728    1995 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0718 20:24:58.827787    1995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 20:24:58.827882    1995 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0718 20:24:58.846297    1995 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0718 20:24:58.846552    1995 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0718 20:24:58.846691    1995 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0718 20:24:58.883524    1995 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0718 20:24:58.883549    1995 cache.go:56] Caching tarball of preloaded images
	I0718 20:24:58.883920    1995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 20:24:58.905869    1995 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0718 20:24:58.905923    1995 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:24:58.993002    1995 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0718 20:25:03.384828    1995 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0718 20:25:08.201863    1995 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:08.202043    1995 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-483000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-483000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
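Note: the preload fetch in the log above appends the expected digest to the download URL (?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3) and then verifies the saved tarball. A minimal Go sketch of that verification step, assuming the tarball sits in the current directory; the path and flow are illustrative, not minikube's download code, though the digest is the one from the URL above:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	// verifyMD5 recomputes a file's MD5 and compares it with the hex digest
	// carried in the ?checksum=md5:... query string.
	func verifyMD5(path, want string) (bool, error) {
		f, err := os.Open(path)
		if err != nil {
			return false, err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return false, err
		}
		return hex.EncodeToString(h.Sum(nil)) == want, nil
	}

	func main() {
		ok, err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
			"9a82241e9b8b4ad2b5cca73108f2c7a3") // digest from the URL above
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("checksum ok:", ok)
	}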

TestDownloadOnly/v1.20.0/DeleteAll (0.65s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.65s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-483000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.30.3/json-events (12.02s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-057000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-057000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker : (12.017899271s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.02s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-057000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-057000: exit status 85 (290.452654ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-483000 | jenkins | v1.33.1 | 18 Jul 24 20:24 PDT |                     |
	|         | -p download-only-483000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| delete  | -p download-only-483000        | download-only-483000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| start   | -o=json --download-only        | download-only-057000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |                     |
	|         | -p download-only-057000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:25:11
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:25:11.157787    2057 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:25:11.158050    2057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:11.158055    2057 out.go:304] Setting ErrFile to fd 2...
	I0718 20:25:11.158059    2057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:11.158219    2057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 20:25:11.159623    2057 out.go:298] Setting JSON to true
	I0718 20:25:11.181635    2057 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1484,"bootTime":1721358027,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 20:25:11.181717    2057 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:25:11.203121    2057 out.go:97] [download-only-057000] minikube v1.33.1 on Darwin 14.5
	I0718 20:25:11.203354    2057 notify.go:220] Checking for updates...
	I0718 20:25:11.224776    2057 out.go:169] MINIKUBE_LOCATION=19302
	I0718 20:25:11.245870    2057 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 20:25:11.267811    2057 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 20:25:11.288889    2057 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:25:11.309868    2057 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	W0718 20:25:11.351807    2057 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0718 20:25:11.352321    2057 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:25:11.376433    2057 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 20:25:11.376595    2057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 20:25:11.456693    2057 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-19 03:25:11.448960302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 20:25:11.477884    2057 out.go:97] Using the docker driver based on user configuration
	I0718 20:25:11.477941    2057 start.go:297] selected driver: docker
	I0718 20:25:11.477953    2057 start.go:901] validating driver "docker" against <nil>
	I0718 20:25:11.478181    2057 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 20:25:11.561996    2057 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-19 03:25:11.551730117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 20:25:11.562168    2057 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:25:11.565004    2057 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0718 20:25:11.565145    2057 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 20:25:11.586900    2057 out.go:169] Using Docker Desktop driver with root privileges
	I0718 20:25:11.607755    2057 cni.go:84] Creating CNI manager for ""
	I0718 20:25:11.607820    2057 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 20:25:11.607848    2057 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 20:25:11.607973    2057 start.go:340] cluster config:
	{Name:download-only-057000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-057000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:25:11.630074    2057 out.go:97] Starting "download-only-057000" primary control-plane node in "download-only-057000" cluster
	I0718 20:25:11.630137    2057 cache.go:121] Beginning downloading kic base image for docker with docker
	I0718 20:25:11.651922    2057 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0718 20:25:11.651986    2057 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:25:11.652082    2057 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0718 20:25:11.670375    2057 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0718 20:25:11.670565    2057 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0718 20:25:11.670584    2057 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0718 20:25:11.670590    2057 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0718 20:25:11.670598    2057 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0718 20:25:11.708492    2057 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 20:25:11.708520    2057 cache.go:56] Caching tarball of preloaded images
	I0718 20:25:11.708900    2057 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:25:11.730978    2057 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0718 20:25:11.731029    2057 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:11.810740    2057 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 20:25:15.209008    2057 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:15.209283    2057 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:15.700910    2057 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 20:25:15.701207    2057 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/download-only-057000/config.json ...
	I0718 20:25:15.701251    2057 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/download-only-057000/config.json: {Name:mkc2d424ec093a6c6324740a755f9f2b60b20ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:25:15.701594    2057 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:25:15.701871    2057 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/darwin/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-057000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-057000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
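The preload fetch above appends a checksum query (?checksum=md5:...) to the download URL and verifies the tarball after it is saved. A minimal shell sketch of that verify-after-download pattern, reusing the URL and md5 recorded in this log (md5 -q is the macOS command; Linux would use md5sum):

    url="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4"
    expected="6304692df2fe6f7b0bdd7f93d160be8c"
    curl -fsSL -o preload.tar.lz4 "$url"          # fetch the preload tarball
    actual=$(md5 -q preload.tar.lz4)              # compute the local digest
    [ "$actual" = "$expected" ] || { echo "checksum mismatch" >&2; exit 1; }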
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

TestDownloadOnly/v1.30.3/DeleteAll (0.34s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.34s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-057000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0-beta.0/json-events (19.47s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-149000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-149000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker : (19.469634344s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (19.47s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-149000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-149000: exit status 85 (293.957177ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-483000 | jenkins | v1.33.1 | 18 Jul 24 20:24 PDT |                     |
	|         | -p download-only-483000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| delete  | -p download-only-483000             | download-only-483000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| start   | -o=json --download-only             | download-only-057000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |                     |
	|         | -p download-only-057000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| delete  | -p download-only-057000             | download-only-057000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| start   | -o=json --download-only             | download-only-149000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |                     |
	|         | -p download-only-149000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:25:24
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:25:24.015598    2106 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:25:24.016184    2106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:24.016193    2106 out.go:304] Setting ErrFile to fd 2...
	I0718 20:25:24.016199    2106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:24.016735    2106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 20:25:24.018332    2106 out.go:298] Setting JSON to true
	I0718 20:25:24.040442    2106 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1497,"bootTime":1721358027,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 20:25:24.040531    2106 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:25:24.061876    2106 out.go:97] [download-only-149000] minikube v1.33.1 on Darwin 14.5
	I0718 20:25:24.062143    2106 notify.go:220] Checking for updates...
	I0718 20:25:24.083479    2106 out.go:169] MINIKUBE_LOCATION=19302
	I0718 20:25:24.104592    2106 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 20:25:24.125670    2106 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 20:25:24.147601    2106 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:25:24.168717    2106 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	W0718 20:25:24.211440    2106 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0718 20:25:24.211936    2106 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:25:24.238134    2106 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 20:25:24.238291    2106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 20:25:24.317241    2106 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-19 03:25:24.309124024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 20:25:24.338909    2106 out.go:97] Using the docker driver based on user configuration
	I0718 20:25:24.338969    2106 start.go:297] selected driver: docker
	I0718 20:25:24.338980    2106 start.go:901] validating driver "docker" against <nil>
	I0718 20:25:24.339182    2106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 20:25:24.425297    2106 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-19 03:25:24.417282894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 20:25:24.425477    2106 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:25:24.428283    2106 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0718 20:25:24.428421    2106 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 20:25:24.449871    2106 out.go:169] Using Docker Desktop driver with root privileges
	I0718 20:25:24.471679    2106 cni.go:84] Creating CNI manager for ""
	I0718 20:25:24.471725    2106 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 20:25:24.471751    2106 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 20:25:24.471905    2106 start.go:340] cluster config:
	{Name:download-only-149000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:25:24.493740    2106 out.go:97] Starting "download-only-149000" primary control-plane node in "download-only-149000" cluster
	I0718 20:25:24.493789    2106 cache.go:121] Beginning downloading kic base image for docker with docker
	I0718 20:25:24.515664    2106 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0718 20:25:24.515783    2106 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 20:25:24.515863    2106 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0718 20:25:24.534247    2106 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0718 20:25:24.534427    2106 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0718 20:25:24.534455    2106 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0718 20:25:24.534461    2106 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0718 20:25:24.534469    2106 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0718 20:25:24.574084    2106 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0718 20:25:24.574108    2106 cache.go:56] Caching tarball of preloaded images
	I0718 20:25:24.574487    2106 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 20:25:24.596364    2106 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0718 20:25:24.596412    2106 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:24.681791    2106 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0718 20:25:34.045892    2106 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:34.046343    2106 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:34.506716    2106 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0718 20:25:34.506953    2106 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/download-only-149000/config.json ...
	I0718 20:25:34.506976    2106 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/download-only-149000/config.json: {Name:mkda477b661dfb8d70011431a4a9041f15ce0002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:25:34.507337    2106 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 20:25:34.507582    2106 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/darwin/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-149000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-149000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
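Exit status 85 here is not a failure of the download itself: the profile was created with --download-only and never started, so there is no host to collect logs from, and the test treats that exit code as the expected outcome. A hedged wrapper sketch that distinguishes this case (the meaning of 85 is inferred from this run, not from documented minikube semantics):

    out/minikube-darwin-amd64 logs -p download-only-149000
    rc=$?
    if [ "$rc" -eq 85 ]; then
      echo "profile exists but was never started; no logs to collect"
    elif [ "$rc" -ne 0 ]; then
      echo "minikube logs failed: exit status $rc" >&2
    fi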
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.34s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.34s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-149000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnlyKic (1.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-294000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-294000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-294000
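A --download-only run with the docker driver leaves artifacts in minikube's on-disk cache rather than a running cluster. The paths below are the cache locations this log writes to, so listing them is a quick way to confirm what a run actually downloaded:

    ls /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/preloaded-tarball/   # preload tarballs
    ls /Users/jenkins/minikube-integration/19302-1453/.minikube/cache/darwin/amd64/        # per-version kubectl binaries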
--- PASS: TestDownloadOnlyKic (1.57s)

TestBinaryMirror (1.32s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-926000 --alsologtostderr --binary-mirror http://127.0.0.1:49347 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-926000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-926000
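TestBinaryMirror points minikube at a local HTTP mirror for the Kubernetes binaries instead of dl.k8s.io. A sketch of the same setup, assuming the mirror serves the upstream /release/<version>/bin/<os>/<arch>/ layout; the ./mirror directory and the binary-mirror-demo profile name are made up for illustration:

    python3 -m http.server 49347 --directory ./mirror &     # hypothetical local mirror
    out/minikube-darwin-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:49347 --driver=docker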
--- PASS: TestBinaryMirror (1.32s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-659000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-659000: exit status 85 (208.922437ms)

-- stdout --
	* Profile "addons-659000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-659000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-659000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-659000: exit status 85 (188.155398ms)

-- stdout --
	* Profile "addons-659000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-659000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (224.54s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-659000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-659000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m44.54336205s)
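The same addon set can also be enabled piecemeal once the cluster is up, which is often easier to debug than one long start invocation; a sketch with a subset of the addons from this run:

    out/minikube-darwin-amd64 start -p addons-659000 --memory=4000 --wait=true --driver=docker
    for addon in registry metrics-server volumesnapshots csi-hostpath-driver ingress; do
      out/minikube-darwin-amd64 -p addons-659000 addons enable "$addon"
    done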
--- PASS: TestAddons/Setup (224.54s)

TestAddons/parallel/InspektorGadget (10.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-94cw5" [db5c545c-9094-41dc-a94f-588c42aac16d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004431795s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-659000
addons_test.go:843: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-659000: (5.801726516s)
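The "waiting ... for pods matching" lines are the harness polling pod status by label; kubectl wait expresses the same check as one blocking call. A sketch using the label and namespace from this test:

    kubectl --context addons-659000 -n gadget wait pod \
      -l k8s-app=gadget --for=condition=Ready --timeout=8m0s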
--- PASS: TestAddons/parallel/InspektorGadget (10.81s)

TestAddons/parallel/MetricsServer (6.67s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.594335ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-fmm68" [95c17d4a-30d5-4600-9765-e647ef504d79] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004510404s
addons_test.go:417: (dbg) Run:  kubectl --context addons-659000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-659000 addons disable metrics-server --alsologtostderr -v=1
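kubectl top only works once metrics-server is serving the metrics API, which is what the healthy-within wait above establishes; querying the API group directly is a useful cross-check:

    kubectl --context addons-659000 top pods -n kube-system
    kubectl --context addons-659000 get --raw /apis/metrics.k8s.io/v1beta1/nodes   # raw metrics API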
--- PASS: TestAddons/parallel/MetricsServer (6.67s)

TestAddons/parallel/HelmTiller (9.66s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.210766ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-mw5h5" [8577a567-dec3-4dcb-b1d3-5f6e8d90ec0c] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006405937s
addons_test.go:475: (dbg) Run:  kubectl --context addons-659000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-659000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.127677011s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-659000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.66s)

TestAddons/parallel/CSI (68.11s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.077563ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-659000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) ... (same poll repeated, 21 runs in total)
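A recent kubectl (1.23 or newer) can do this jsonpath polling server-side in a single blocking call; a sketch equivalent to the loop above:

    kubectl --context addons-659000 -n default wait pvc/hpvc \
      --for=jsonpath='{.status.phase}'=Bound --timeout=6m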
addons_test.go:576: (dbg) Run:  kubectl --context addons-659000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2bdc3992-f950-400d-b806-8c9b6be36951] Pending
helpers_test.go:344: "task-pv-pod" [2bdc3992-f950-400d-b806-8c9b6be36951] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2bdc3992-f950-400d-b806-8c9b6be36951] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005886328s
addons_test.go:586: (dbg) Run:  kubectl --context addons-659000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-659000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-659000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-659000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-659000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-659000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) ... (same poll repeated, 18 runs in total)
addons_test.go:618: (dbg) Run:  kubectl --context addons-659000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [25e51462-a7eb-475d-a0fd-45e73e042988] Pending
helpers_test.go:344: "task-pv-pod-restore" [25e51462-a7eb-475d-a0fd-45e73e042988] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [25e51462-a7eb-475d-a0fd-45e73e042988] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003672205s
addons_test.go:628: (dbg) Run:  kubectl --context addons-659000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-659000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-659000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-amd64 -p addons-659000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-amd64 -p addons-659000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.562712423s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-659000 addons disable volumesnapshots --alsologtostderr -v=1
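The restore leg of this test hinges on the new PVC naming the snapshot as its dataSource. A minimal sketch of the two objects involved; the object names match the log, while the snapshot class, storage class, and size are assumptions rather than values from this run:

    kubectl --context addons-659000 apply -f - <<'EOF'
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass    # assumed class name
      source:
        persistentVolumeClaimName: hpvc
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc                  # assumed class name
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi                                   # assumed size
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
    EOF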
--- PASS: TestAddons/parallel/CSI (68.11s)

TestAddons/parallel/Headlamp (14.36s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-659000 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-659000 --alsologtostderr -v=1: (1.350856161s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-6d8hm" [6277ca74-759e-4625-9050-70ce7b3a35a2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-6d8hm" [6277ca74-759e-4625-9050-70ce7b3a35a2] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.00498112s
--- PASS: TestAddons/parallel/Headlamp (14.36s)
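
The Headlamp flow can be reproduced by hand: enable the addon on the profile, then wait for its pod to become Ready (a sketch assuming kubectl points at the same cluster; the 8m timeout mirrors the test's wait):

    # Enable the addon, then block until the dashboard pod is Ready.
    minikube addons enable headlamp -p addons-659000
    kubectl --context addons-659000 -n headlamp wait pod \
      --for=condition=Ready -l app.kubernetes.io/name=headlamp --timeout=8m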

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-xhb7f" [ed64b159-55da-4254-9eaa-352cd6082386] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004642359s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-659000
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (50.94s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-659000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-659000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3003e242-3c3c-4c2d-9e44-52fd2cd638c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3003e242-3c3c-4c2d-9e44-52fd2cd638c1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3003e242-3c3c-4c2d-9e44-52fd2cd638c1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004442991s
addons_test.go:992: (dbg) Run:  kubectl --context addons-659000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-amd64 -p addons-659000 ssh "cat /opt/local-path-provisioner/pvc-b0a64b8e-ab72-4d71-9859-5a2bfd2f863e_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-659000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-659000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-amd64 -p addons-659000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-amd64 -p addons-659000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (40.032934317s)
--- PASS: TestAddons/parallel/LocalPath (50.94s)
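
The LocalPath check writes through the claim and then reads the file straight off the node's disk. By hand that is roughly (a sketch; <pvc-uid> stands in for the generated claim ID visible in the ssh command above):

    # local-path-provisioner stores each volume under this directory on the node.
    minikube -p addons-659000 ssh \
      "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"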

TestAddons/parallel/NvidiaDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-crpl8" [f235905c-5b4c-4ee2-8ff3-e5fdabaa2557] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004713497s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-659000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-b6rr6" [fce00e0f-547f-4dad-bedb-11e20dd31479] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004537136s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/parallel/Volcano (35.31s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 2.470908ms
addons_test.go:889: volcano-scheduler stabilized in 2.529942ms
addons_test.go:897: volcano-admission stabilized in 3.082429ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-l9vcz" [c8ba104f-aa5e-45c1-b1f8-ebc4b5621975] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.007087946s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-nmqxt" [89141f81-1a4b-4c25-91ec-263b97407cac] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.003899597s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-477ds" [3dccf1a4-3dda-4183-ad64-9119c3665ade] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.005253986s
addons_test.go:924: (dbg) Run:  kubectl --context addons-659000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-659000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-659000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5f61d5f9-4d15-405b-9c52-2bf5cbc900da] Pending
helpers_test.go:344: "test-job-nginx-0" [5f61d5f9-4d15-405b-9c52-2bf5cbc900da] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5f61d5f9-4d15-405b-9c52-2bf5cbc900da] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 10.003568828s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-amd64 -p addons-659000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-amd64 -p addons-659000 addons disable volcano --alsologtostderr -v=1: (10.038462939s)
--- PASS: TestAddons/parallel/Volcano (35.31s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-659000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-659000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.48s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-659000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-659000: (10.940399942s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-659000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-659000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-659000
--- PASS: TestAddons/StoppedEnableDisable (11.48s)

TestHyperKitDriverInstallOrUpdate (6.76s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.76s)

TestErrorSpam/setup (20.76s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-529000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-529000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 --driver=docker : (20.763023686s)
--- PASS: TestErrorSpam/setup (20.76s)

TestErrorSpam/start (2.13s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 start --dry-run
--- PASS: TestErrorSpam/start (2.13s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.4s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 pause
--- PASS: TestErrorSpam/pause (1.40s)

TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

TestErrorSpam/stop (2.3s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 stop: (1.831598466s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-529000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-529000 stop
--- PASS: TestErrorSpam/stop (2.30s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19302-1453/.minikube/files/etc/test/nested/copy/1993/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (74.83s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-258000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-258000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m14.83142611s)
--- PASS: TestFunctional/serial/StartWithProxy (74.83s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.99s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-258000 --alsologtostderr -v=8
E0718 20:34:32.586869    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:34:32.593941    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:34:32.604093    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:34:32.625048    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:34:32.666111    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:34:32.746307    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:34:32.906936    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:34:33.227474    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:34:33.869612    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:34:35.151262    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:34:37.712697    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-258000 --alsologtostderr -v=8: (33.986316733s)
functional_test.go:659: soft start took 33.986775489s for "functional-258000" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.99s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-258000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 cache add registry.k8s.io/pause:3.1
E0718 20:34:42.832797    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-258000 cache add registry.k8s.io/pause:3.1: (1.076348303s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-258000 cache add registry.k8s.io/pause:3.3: (1.093145562s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)
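
The cache subcommands exercised here pre-seed images into the cluster so the node never has to pull them at run time; the same flow by hand (mirroring the commands in the log):

    # Add an image to minikube's local cache and load it into the node,
    # then inspect and clean up the cache.
    minikube -p functional-258000 cache add registry.k8s.io/pause:3.1
    minikube cache list
    minikube cache delete registry.k8s.io/pause:3.1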

TestFunctional/serial/CacheCmd/cache/add_local (1.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3409593273/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 cache add minikube-local-cache-test:functional-258000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 cache delete minikube-local-cache-test:functional-258000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-258000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.40s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (241.806224ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.40s)
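
The reload sequence demonstrates recovering a deleted image from the local cache without a registry pull; the equivalent manual steps (mirroring the commands in the log):

    # Remove the image inside the node, restore it from the cache, verify.
    minikube -p functional-258000 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-258000 cache reload
    minikube -p functional-258000 ssh sudo crictl inspecti registry.k8s.io/pause:latest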

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 kubectl -- --context functional-258000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-258000 kubectl -- --context functional-258000 get pods: (1.138913245s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.48s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-258000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-258000 get pods: (1.481695067s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.48s)

TestFunctional/serial/ExtraConfig (42.02s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-258000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0718 20:34:53.073013    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
E0718 20:35:13.553049    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-258000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.019663928s)
functional_test.go:757: restart took 42.019783409s for "functional-258000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.02s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-258000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
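
The health check parses the control-plane pod list; one way to eyeball the same phase data by hand (a jsonpath sketch, not the test's actual code):

    # Print each control-plane pod with its phase.
    kubectl --context functional-258000 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.phase}{"\n"}{end}'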

TestFunctional/serial/LogsCmd (3.01s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-258000 logs: (3.006791059s)
--- PASS: TestFunctional/serial/LogsCmd (3.01s)

TestFunctional/serial/LogsFileCmd (2.78s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd704268275/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-258000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd704268275/001/logs.txt: (2.777655029s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.78s)

TestFunctional/serial/InvalidService (3.92s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-258000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-258000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-258000: exit status 115 (371.574384ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32603 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-258000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.92s)
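
Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the Service selects no running pod, so it has no endpoints. That can be confirmed directly (a sketch):

    # An endpoints object with no addresses means no backing pod is serving.
    kubectl --context functional-258000 get endpoints invalid-svc -n default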

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 config get cpus: exit status 14 (60.485632ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 config get cpus: exit status 14 (57.497633ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
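
The exit status 14 results above are what `config get` returns for an unset key; the round trip by hand (mirroring the log's commands):

    minikube -p functional-258000 config set cpus 2
    minikube -p functional-258000 config get cpus     # prints 2, exits 0
    minikube -p functional-258000 config unset cpus
    minikube -p functional-258000 config get cpus     # exits 14: key not found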

TestFunctional/parallel/DashboardCmd (9.95s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-258000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-258000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 5236: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.95s)
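
The daemon under test serves the dashboard over a local proxy; run by hand it prints the URL and blocks until interrupted (a sketch using the same flags as the log):

    # Serve the dashboard on a fixed local port and print its URL.
    minikube dashboard --url --port 36195 -p functional-258000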

TestFunctional/parallel/DryRun (1.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-258000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-258000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (718.171238ms)

-- stdout --
	* [functional-258000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0718 20:36:25.204284    5178 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:36:25.204455    5178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:25.204460    5178 out.go:304] Setting ErrFile to fd 2...
	I0718 20:36:25.204464    5178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:25.204617    5178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 20:36:25.206037    5178 out.go:298] Setting JSON to false
	I0718 20:36:25.228457    5178 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2158,"bootTime":1721358027,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 20:36:25.228552    5178 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:36:25.250493    5178 out.go:177] * [functional-258000] minikube v1.33.1 on Darwin 14.5
	I0718 20:36:25.292087    5178 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:36:25.292122    5178 notify.go:220] Checking for updates...
	I0718 20:36:25.350042    5178 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 20:36:25.392282    5178 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 20:36:25.466065    5178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:36:25.508313    5178 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	I0718 20:36:25.529203    5178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:36:25.551123    5178 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:25.551878    5178 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:36:25.575908    5178 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 20:36:25.576103    5178 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 20:36:25.662938    5178 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:75 SystemTime:2024-07-19 03:36:25.654387506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 20:36:25.705678    5178 out.go:177] * Using the docker driver based on existing profile
	I0718 20:36:25.726793    5178 start.go:297] selected driver: docker
	I0718 20:36:25.726819    5178 start.go:901] validating driver "docker" against &{Name:functional-258000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-258000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:25.726957    5178 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:36:25.751829    5178 out.go:177] 
	W0718 20:36:25.772714    5178 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0718 20:36:25.793846    5178 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-258000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.45s)
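
The exit status 23 above comes from memory validation during --dry-run: 250MB is below minikube's usable minimum of 1800MB, so validation fails before any container is created. Reproduced by hand (same flags as the log):

    minikube start -p functional-258000 --dry-run --memory 250MB --driver=docker
    echo $?    # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)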

TestFunctional/parallel/InternationalLanguage (0.61s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-258000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-258000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (613.158221ms)

-- stdout --
	* [functional-258000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0718 20:36:24.585815    5162 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:36:24.586079    5162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:24.586085    5162 out.go:304] Setting ErrFile to fd 2...
	I0718 20:36:24.586089    5162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:36:24.586296    5162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 20:36:24.588174    5162 out.go:298] Setting JSON to false
	I0718 20:36:24.611532    5162 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2157,"bootTime":1721358027,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0718 20:36:24.611631    5162 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:36:24.641130    5162 out.go:177] * [functional-258000] minikube v1.33.1 sur Darwin 14.5
	I0718 20:36:24.683989    5162 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:36:24.684065    5162 notify.go:220] Checking for updates...
	I0718 20:36:24.725933    5162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
	I0718 20:36:24.746803    5162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 20:36:24.768024    5162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:36:24.789074    5162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube
	I0718 20:36:24.809885    5162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:36:24.831691    5162 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:36:24.832415    5162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:36:24.856066    5162 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0718 20:36:24.856228    5162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0718 20:36:24.937960    5162 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:75 SystemTime:2024-07-19 03:36:24.92934219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768057344 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0718 20:36:24.980218    5162 out.go:177] * Using the docker driver based on existing profile
	I0718 20:36:25.022612    5162 start.go:297] selected driver: docker
	I0718 20:36:25.022639    5162 start.go:901] validating driver "docker" against &{Name:functional-258000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-258000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:36:25.022770    5162 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:36:25.065584    5162 out.go:177] 
	W0718 20:36:25.086531    5162 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0718 20:36:25.107494    5162 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.61s)
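
This subtest drives minikube under a French locale and checks that the RSRC_INSUFFICIENT_REQ_MEMORY failure is reported in the locale's language. A minimal repro sketch, assuming minikube picks the locale up from LC_ALL as official builds with bundled translations do; the undersized --memory value fails validation before any driver work starts:

	LC_ALL=fr out/minikube-darwin-amd64 start -p functional-258000 --memory=250MB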

                                                
                                    
TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)
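
The -f flag above takes a Go template over the status struct; the same fields come back from -o json for scripting. A sketch, assuming jq is installed:

	out/minikube-darwin-amd64 -p functional-258000 status -o json | jq -r '.Host'   # expect: Running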

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)
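
addons list only reads state; flipping an addon on is one command against the same profile. A sketch using the stock metrics-server addon:

	out/minikube-darwin-amd64 -p functional-258000 addons enable metrics-server
	out/minikube-darwin-amd64 -p functional-258000 addons list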

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e83b7a57-81a1-45f4-8a25-d28e9d3cdd4f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004723456s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-258000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-258000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-258000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-258000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [50c225ea-6970-4b50-a9a7-4a9f4a70bc01] Pending
helpers_test.go:344: "sp-pod" [50c225ea-6970-4b50-a9a7-4a9f4a70bc01] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0718 20:35:54.512445    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [50c225ea-6970-4b50-a9a7-4a9f4a70bc01] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005743322s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-258000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-258000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-258000 delete -f testdata/storage-provisioner/pod.yaml: (1.23544128s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-258000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [602374a6-308b-49e0-aae7-05a6a3dab643] Pending
helpers_test.go:344: "sp-pod" [602374a6-308b-49e0-aae7-05a6a3dab643] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [602374a6-308b-49e0-aae7-05a6a3dab643] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00507198s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-258000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.83s)
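
The pass signal here is the round trip: /tmp/mount/foo, written before the pod deletion, is still listed afterwards, proving the claim kept its volume. The claim's binding can also be checked directly; a sketch against the same context:

	kubectl --context functional-258000 get pvc myclaim -o jsonpath='{.status.phase}'   # expect: Bound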

                                                
                                    
TestFunctional/parallel/SSHCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)
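
minikube ssh forwards an arbitrary command to the node, so one-off node-side checks need no separate SSH configuration. A sketch:

	out/minikube-darwin-amd64 -p functional-258000 ssh "uname -a"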

                                                
                                    
TestFunctional/parallel/CpCmd (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh -n functional-258000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 cp functional-258000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd3684220189/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh -n functional-258000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh -n functional-258000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.64s)
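
The cp subcommand infers direction from which argument carries the <node>:<path> prefix, and the last pair above shows it creating missing target directories on the node. A sketch of the node-to-host direction:

	out/minikube-darwin-amd64 -p functional-258000 cp functional-258000:/home/docker/cp-test.txt ./cp-test.txt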

                                                
                                    
TestFunctional/parallel/MySQL (28.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-258000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-qtmlf" [c85fa8f6-25fa-4f44-a515-634ceaad2a4f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-qtmlf" [c85fa8f6-25fa-4f44-a515-634ceaad2a4f] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.005608131s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-258000 exec mysql-64454c8b5c-qtmlf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-258000 exec mysql-64454c8b5c-qtmlf -- mysql -ppassword -e "show databases;": exit status 1 (151.493647ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-258000 exec mysql-64454c8b5c-qtmlf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-258000 exec mysql-64454c8b5c-qtmlf -- mysql -ppassword -e "show databases;": exit status 1 (115.987397ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-258000 exec mysql-64454c8b5c-qtmlf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-258000 exec mysql-64454c8b5c-qtmlf -- mysql -ppassword -e "show databases;": exit status 1 (107.423764ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-258000 exec mysql-64454c8b5c-qtmlf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.62s)
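
The two non-zero exits above are the usual mysqld warm-up: the image's init phase first answers with access denied, then briefly takes the socket down before the real server accepts the root password. The harness simply retries until the query succeeds; an interactive retry sketch, assuming the Deployment from testdata/mysql.yaml is named mysql:

	# deploy/mysql is an assumption about the manifest's Deployment name
	until kubectl --context functional-258000 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do sleep 2; done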

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1993/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "sudo cat /etc/test/nested/copy/1993/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1993.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "sudo cat /etc/ssl/certs/1993.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1993.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "sudo cat /usr/share/ca-certificates/1993.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/19932.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "sudo cat /etc/ssl/certs/19932.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/19932.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "sudo cat /usr/share/ca-certificates/19932.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.56s)
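
The .0 names are openssl subject-hash aliases, which is why each .pem above is checked alongside a hash file (1993.pem with 51391683.0, 19932.pem with 3ec20f2e.0). A verification sketch, assuming openssl is on PATH and the cert file is available locally:

	openssl x509 -noout -subject_hash -in 1993.pem   # expect: 51391683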

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-258000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
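
The go-template above dumps the label keys of the first node; kubectl can print the same set without a template. A sketch:

	kubectl --context functional-258000 get nodes --show-labels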

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 ssh "sudo systemctl is-active crio": exit status 1 (245.271472ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)
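
The exit status 3 is systemctl is-active reporting an inactive unit, which is exactly what the test wants for crio on a docker-runtime cluster. The active runtime should answer cleanly; a sketch:

	out/minikube-darwin-amd64 -p functional-258000 ssh "sudo systemctl is-active docker"   # expect: active, exit 0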

                                                
                                    
TestFunctional/parallel/License (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.36s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-258000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-258000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-258000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-258000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4837: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-258000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-258000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6edd70b5-e005-42b6-a0b2-d9c7459c895f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6edd70b5-e005-42b6-a0b2-d9c7459c895f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003525852s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-258000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
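
With the tunnel up, the service's LoadBalancer ingress resolves to 127.0.0.1 on the Docker driver, so the nginx backend is one plain HTTP request away. A sketch:

	curl -fsS http://127.0.0.1/ | head -n 4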

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-258000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4867: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-258000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-258000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-xw5pr" [639f9e94-fdae-4a08-b1cf-891a0065d741] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-xw5pr" [639f9e94-fdae-4a08-b1cf-891a0065d741] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004542522s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "280.367339ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "80.445053ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "278.6553ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "83.608796ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2073103102/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721360173319611000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2073103102/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721360173319611000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2073103102/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721360173319611000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2073103102/001/test-1721360173319611000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (265.62563ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 03:36 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 03:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 03:36 test-1721360173319611000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh cat /mount-9p/test-1721360173319611000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-258000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [42682449-e500-4cce-818a-caba6d78afb6] Pending
helpers_test.go:344: "busybox-mount" [42682449-e500-4cce-818a-caba6d78afb6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [42682449-e500-4cce-818a-caba6d78afb6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [42682449-e500-4cce-818a-caba6d78afb6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004834897s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-258000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2073103102/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.58s)
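
The mount daemon stays in the foreground and serves the host directory into the node over 9p, which is why the first findmnt probe above can race the mount and gets retried. An interactive sketch, with an illustrative host path and the explicit port flag the next subtest exercises:

	# $HOME/somedir is illustrative; any host directory works
	out/minikube-darwin-amd64 mount -p functional-258000 "$HOME/somedir:/mount-9p" --port 46464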

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 service list -o json
functional_test.go:1490: Took "611.04491ms" to run "out/minikube-darwin-amd64 -p functional-258000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 service --namespace=default --https --url hello-node: signal: killed (15.00280116s)

                                                
                                                
-- stdout --
	https://127.0.0.1:50411

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:50411
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
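
The signal: killed result is the harness's own 15s timeout, not a failure: on the Docker driver on darwin the service command has to keep running in the foreground (hence the stderr note), so the URL is scraped from its streaming stdout rather than from a clean exit. Run interactively, the same command is simply left open:

	out/minikube-darwin-amd64 -p functional-258000 service --namespace=default --https --url hello-node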

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2321507362/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.878032ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2321507362/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 ssh "sudo umount -f /mount-9p": exit status 1 (222.625728ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-258000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2321507362/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2227807527/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2227807527/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2227807527/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 ssh "findmnt -T" /mount1: exit status 1 (325.961632ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-258000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2227807527/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2227807527/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-258000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2227807527/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)
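
The --kill=true form above tears down every mount daemon for the profile in one shot, which is why all three per-mount stopping steps then find their parent processes already gone. The same cleanup works interactively:

	out/minikube-darwin-amd64 mount -p functional-258000 --kill=true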

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 service hello-node --url --format={{.IP}}
2024/07/18 20:36:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 service hello-node --url --format={{.IP}}: signal: killed (15.00293698s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
TestFunctional/parallel/Version/short (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

                                                
                                    
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-258000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-258000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-258000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-258000 image ls --format short --alsologtostderr:
I0718 20:37:00.410904    5456 out.go:291] Setting OutFile to fd 1 ...
I0718 20:37:00.411205    5456 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:37:00.411211    5456 out.go:304] Setting ErrFile to fd 2...
I0718 20:37:00.411216    5456 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:37:00.411404    5456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
I0718 20:37:00.412018    5456 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:37:00.412127    5456 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:37:00.412535    5456 cli_runner.go:164] Run: docker container inspect functional-258000 --format={{.State.Status}}
I0718 20:37:00.432813    5456 ssh_runner.go:195] Run: systemctl --version
I0718 20:37:00.432890    5456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258000
I0718 20:37:00.453674    5456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50155 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1453/.minikube/machines/functional-258000/id_rsa Username:docker}
I0718 20:37:00.537960    5456 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-258000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-258000 | c2f4889549532 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| docker.io/kicbase/echo-server               | functional-258000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-258000 image ls --format table --alsologtostderr:
I0718 20:37:01.104196    5468 out.go:291] Setting OutFile to fd 1 ...
I0718 20:37:01.104481    5468 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:37:01.104487    5468 out.go:304] Setting ErrFile to fd 2...
I0718 20:37:01.104491    5468 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:37:01.104683    5468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
I0718 20:37:01.105419    5468 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:37:01.105528    5468 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:37:01.105946    5468 cli_runner.go:164] Run: docker container inspect functional-258000 --format={{.State.Status}}
I0718 20:37:01.126246    5468 ssh_runner.go:195] Run: systemctl --version
I0718 20:37:01.126317    5468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258000
I0718 20:37:01.145894    5468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50155 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1453/.minikube/machines/functional-258000/id_rsa Username:docker}
I0718 20:37:01.232700    5468 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-258000 image ls --format json --alsologtostderr:
[{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691
a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"]
,"size":"188000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"c2f48895495323d6c48be61195d7b66c05999f291a8d6013bca6c2d3b7c3e292","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-258000"],"size":"30"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-258000"],"size":"4940000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size"
:"43800000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-258000 image ls --format json --alsologtostderr:
I0718 20:37:00.874148    5464 out.go:291] Setting OutFile to fd 1 ...
I0718 20:37:00.874351    5464 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:37:00.874356    5464 out.go:304] Setting ErrFile to fd 2...
I0718 20:37:00.874360    5464 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:37:00.874544    5464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
I0718 20:37:00.875201    5464 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:37:00.875309    5464 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:37:00.875717    5464 cli_runner.go:164] Run: docker container inspect functional-258000 --format={{.State.Status}}
I0718 20:37:00.896269    5464 ssh_runner.go:195] Run: systemctl --version
I0718 20:37:00.896345    5464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258000
I0718 20:37:00.915710    5464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50155 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1453/.minikube/machines/functional-258000/id_rsa Username:docker}
I0718 20:37:00.997777    5464 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
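Note: the stdout above is one compact JSON array of image records; the YAML listing in the next test is the same records in a different encoding. A minimal Go sketch that decodes that shape (field names taken from the output shown; an illustration only, not part of functional_test.go):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors the fields visible in the `image ls --format json` output above.
	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func main() {
		// Hypothetical local reproduction of the command exercised by this test.
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-258000",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}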

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-258000 image ls --format yaml --alsologtostderr:
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-258000
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c2f48895495323d6c48be61195d7b66c05999f291a8d6013bca6c2d3b7c3e292
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-258000
size: "30"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-258000 image ls --format yaml --alsologtostderr:
I0718 20:37:00.643629    5460 out.go:291] Setting OutFile to fd 1 ...
I0718 20:37:00.643931    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:37:00.643937    5460 out.go:304] Setting ErrFile to fd 2...
I0718 20:37:00.643941    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:37:00.644124    5460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
I0718 20:37:00.644738    5460 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:37:00.644842    5460 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:37:00.645233    5460 cli_runner.go:164] Run: docker container inspect functional-258000 --format={{.State.Status}}
I0718 20:37:00.666011    5460 ssh_runner.go:195] Run: systemctl --version
I0718 20:37:00.666091    5460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258000
I0718 20:37:00.686339    5460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50155 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1453/.minikube/machines/functional-258000/id_rsa Username:docker}
I0718 20:37:00.767630    5460 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 ssh pgrep buildkitd: exit status 1 (233.270046ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image build -t localhost/my-image:functional-258000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-258000 image build -t localhost/my-image:functional-258000 testdata/build --alsologtostderr: (2.329846129s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-258000 image build -t localhost/my-image:functional-258000 testdata/build --alsologtostderr:
I0718 20:37:01.601175    5478 out.go:291] Setting OutFile to fd 1 ...
I0718 20:37:01.601513    5478 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:37:01.601519    5478 out.go:304] Setting ErrFile to fd 2...
I0718 20:37:01.601524    5478 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:37:01.601736    5478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
I0718 20:37:01.602348    5478 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:37:01.603687    5478 config.go:182] Loaded profile config "functional-258000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:37:01.604146    5478 cli_runner.go:164] Run: docker container inspect functional-258000 --format={{.State.Status}}
I0718 20:37:01.624705    5478 ssh_runner.go:195] Run: systemctl --version
I0718 20:37:01.624780    5478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258000
I0718 20:37:01.646005    5478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50155 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1453/.minikube/machines/functional-258000/id_rsa Username:docker}
I0718 20:37:01.729489    5478 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2200228705.tar
I0718 20:37:01.729568    5478 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0718 20:37:01.739903    5478 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2200228705.tar
I0718 20:37:01.744039    5478 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2200228705.tar: stat -c "%s %y" /var/lib/minikube/build/build.2200228705.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2200228705.tar': No such file or directory
I0718 20:37:01.744068    5478 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2200228705.tar --> /var/lib/minikube/build/build.2200228705.tar (3072 bytes)
I0718 20:37:01.767569    5478 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2200228705
I0718 20:37:01.777481    5478 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2200228705 -xf /var/lib/minikube/build/build.2200228705.tar
I0718 20:37:01.787262    5478 docker.go:360] Building image: /var/lib/minikube/build/build.2200228705
I0718 20:37:01.787363    5478 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-258000 /var/lib/minikube/build/build.2200228705
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:cc6d62545ae3668997c0a8b644df023a9c541a4935887d39536616df61d4e3a8 done
#8 naming to localhost/my-image:functional-258000 done
#8 DONE 0.0s
I0718 20:37:03.814289    5478 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-258000 /var/lib/minikube/build/build.2200228705: (2.026945077s)
I0718 20:37:03.814383    5478 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2200228705
I0718 20:37:03.824734    5478 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2200228705.tar
I0718 20:37:03.834385    5478 build_images.go:217] Built localhost/my-image:functional-258000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2200228705.tar
I0718 20:37:03.834424    5478 build_images.go:133] succeeded building to: functional-258000
I0718 20:37:03.834432    5478 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.81s)
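The BuildKit steps logged above imply a three-instruction Dockerfile under testdata/build (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /). A hedged Go sketch of driving the same build step from outside the harness (binary path, profile, and tag copied from the log; the test's own code differs):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Build the context at testdata/build into the tagged image, as the test run did.
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-258000",
			"image", "build", "-t", "localhost/my-image:functional-258000", "testdata/build")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("build failed:", err)
		}
	}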

TestFunctional/parallel/ImageCommands/Setup (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.782694649s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-258000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image load --daemon docker.io/kicbase/echo-server:functional-258000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image load --daemon docker.io/kicbase/echo-server:functional-258000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-258000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image load --daemon docker.io/kicbase/echo-server:functional-258000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.60s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image save docker.io/kicbase/echo-server:functional-258000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image rm docker.io/kicbase/echo-server:functional-258000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-258000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 image save --daemon docker.io/kicbase/echo-server:functional-258000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-258000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-258000 service hello-node --url: signal: killed (15.002805775s)
-- stdout --
	http://127.0.0.1:50549
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:50549
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)
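The `signal: killed` exit above is expected rather than a failure: with the Docker driver on darwin, `minikube service --url` keeps a tunnel process in the foreground, so the harness reads the printed URL and then kills the process after a timeout. A sketch of that pattern (the 15s value mirrors the log; the harness's actual implementation differs):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Let the foreground tunnel run for 15s, then kill it, as this test run did.
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
			"-p", "functional-258000", "service", "hello-node", "--url")
		out, err := cmd.Output() // on timeout, err reports the kill signal
		fmt.Printf("%s\nerr: %v\n", out, err)
	}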

TestFunctional/parallel/DockerEnv/bash (0.95s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-258000 docker-env) && out/minikube-darwin-amd64 status -p functional-258000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-258000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.95s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-258000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-258000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-258000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-258000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (102.82s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-105000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-105000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m42.14413127s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (102.82s)

TestMultiControlPlane/serial/DeployApp (6.9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-105000 -- rollout status deployment/busybox: (4.42134774s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-424cx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-gzbs7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-t97rq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-424cx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-gzbs7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-t97rq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-424cx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-gzbs7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-t97rq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.90s)

TestMultiControlPlane/serial/PingHostFromPods (1.39s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-424cx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-424cx -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-gzbs7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-gzbs7 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-t97rq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-105000 -- exec busybox-fc5497c4f-t97rq -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)

TestMultiControlPlane/serial/AddWorkerNode (20.1s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-105000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-105000 -v=7 --alsologtostderr: (19.245182321s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.10s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-105000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.66s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.66s)

TestMultiControlPlane/serial/CopyFile (15.91s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp testdata/cp-test.txt ha-105000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile597112528/001/cp-test_ha-105000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000:/home/docker/cp-test.txt ha-105000-m02:/home/docker/cp-test_ha-105000_ha-105000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000 "sudo cat /home/docker/cp-test.txt"
E0718 20:39:32.581014    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m02 "sudo cat /home/docker/cp-test_ha-105000_ha-105000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000:/home/docker/cp-test.txt ha-105000-m03:/home/docker/cp-test_ha-105000_ha-105000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m03 "sudo cat /home/docker/cp-test_ha-105000_ha-105000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000:/home/docker/cp-test.txt ha-105000-m04:/home/docker/cp-test_ha-105000_ha-105000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m04 "sudo cat /home/docker/cp-test_ha-105000_ha-105000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp testdata/cp-test.txt ha-105000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile597112528/001/cp-test_ha-105000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m02:/home/docker/cp-test.txt ha-105000:/home/docker/cp-test_ha-105000-m02_ha-105000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000 "sudo cat /home/docker/cp-test_ha-105000-m02_ha-105000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m02:/home/docker/cp-test.txt ha-105000-m03:/home/docker/cp-test_ha-105000-m02_ha-105000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m03 "sudo cat /home/docker/cp-test_ha-105000-m02_ha-105000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m02:/home/docker/cp-test.txt ha-105000-m04:/home/docker/cp-test_ha-105000-m02_ha-105000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m04 "sudo cat /home/docker/cp-test_ha-105000-m02_ha-105000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp testdata/cp-test.txt ha-105000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile597112528/001/cp-test_ha-105000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m03:/home/docker/cp-test.txt ha-105000:/home/docker/cp-test_ha-105000-m03_ha-105000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000 "sudo cat /home/docker/cp-test_ha-105000-m03_ha-105000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m03:/home/docker/cp-test.txt ha-105000-m02:/home/docker/cp-test_ha-105000-m03_ha-105000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m02 "sudo cat /home/docker/cp-test_ha-105000-m03_ha-105000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m03:/home/docker/cp-test.txt ha-105000-m04:/home/docker/cp-test_ha-105000-m03_ha-105000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m04 "sudo cat /home/docker/cp-test_ha-105000-m03_ha-105000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp testdata/cp-test.txt ha-105000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile597112528/001/cp-test_ha-105000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m04:/home/docker/cp-test.txt ha-105000:/home/docker/cp-test_ha-105000-m04_ha-105000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000 "sudo cat /home/docker/cp-test_ha-105000-m04_ha-105000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m04:/home/docker/cp-test.txt ha-105000-m02:/home/docker/cp-test_ha-105000-m04_ha-105000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m02 "sudo cat /home/docker/cp-test_ha-105000-m04_ha-105000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 cp ha-105000-m04:/home/docker/cp-test.txt ha-105000-m03:/home/docker/cp-test_ha-105000-m04_ha-105000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 ssh -n ha-105000-m03 "sudo cat /home/docker/cp-test_ha-105000-m04_ha-105000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.91s)
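The copy matrix above follows a regular pattern: seed each node from testdata/cp-test.txt, copy each node's file back to the host, then copy it to every other node and verify with ssh + cat. A sketch of generating the ordered cross-node pairs (node names taken from the log; an illustration, not the test's code):

	package main

	import "fmt"

	func main() {
		nodes := []string{"ha-105000", "ha-105000-m02", "ha-105000-m03", "ha-105000-m04"}
		for _, src := range nodes {
			for _, dst := range nodes {
				if src == dst {
					continue
				}
				// One cross-node copy, mirroring the cp commands logged above.
				fmt.Printf("minikube -p ha-105000 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
					src, dst, src, dst)
			}
		}
	}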

TestMultiControlPlane/serial/StopSecondaryNode (11.31s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-105000 node stop m02 -v=7 --alsologtostderr: (10.689346764s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-105000 status -v=7 --alsologtostderr: exit status 7 (623.750496ms)
-- stdout --
	ha-105000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-105000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105000-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0718 20:39:56.931435    6317 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:39:56.931702    6317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:39:56.931709    6317 out.go:304] Setting ErrFile to fd 2...
	I0718 20:39:56.931712    6317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:39:56.931912    6317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 20:39:56.932082    6317 out.go:298] Setting JSON to false
	I0718 20:39:56.932104    6317 mustload.go:65] Loading cluster: ha-105000
	I0718 20:39:56.932150    6317 notify.go:220] Checking for updates...
	I0718 20:39:56.932405    6317 config.go:182] Loaded profile config "ha-105000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:39:56.932420    6317 status.go:255] checking status of ha-105000 ...
	I0718 20:39:56.932840    6317 cli_runner.go:164] Run: docker container inspect ha-105000 --format={{.State.Status}}
	I0718 20:39:56.951439    6317 status.go:330] ha-105000 host status = "Running" (err=<nil>)
	I0718 20:39:56.951472    6317 host.go:66] Checking if "ha-105000" exists ...
	I0718 20:39:56.951747    6317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-105000
	I0718 20:39:56.970603    6317 host.go:66] Checking if "ha-105000" exists ...
	I0718 20:39:56.970886    6317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:39:56.970947    6317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-105000
	I0718 20:39:56.989153    6317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50619 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1453/.minikube/machines/ha-105000/id_rsa Username:docker}
	I0718 20:39:57.070591    6317 ssh_runner.go:195] Run: systemctl --version
	I0718 20:39:57.075160    6317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:39:57.085612    6317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-105000
	I0718 20:39:57.104519    6317 kubeconfig.go:125] found "ha-105000" server: "https://127.0.0.1:50623"
	I0718 20:39:57.104569    6317 api_server.go:166] Checking apiserver status ...
	I0718 20:39:57.104623    6317 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:39:57.115832    6317 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2386/cgroup
	W0718 20:39:57.124869    6317 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2386/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0718 20:39:57.124937    6317 ssh_runner.go:195] Run: ls
	I0718 20:39:57.128783    6317 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50623/healthz ...
	I0718 20:39:57.132615    6317 api_server.go:279] https://127.0.0.1:50623/healthz returned 200:
	ok
	I0718 20:39:57.132627    6317 status.go:422] ha-105000 apiserver status = Running (err=<nil>)
	I0718 20:39:57.132640    6317 status.go:257] ha-105000 status: &{Name:ha-105000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:39:57.132652    6317 status.go:255] checking status of ha-105000-m02 ...
	I0718 20:39:57.132877    6317 cli_runner.go:164] Run: docker container inspect ha-105000-m02 --format={{.State.Status}}
	I0718 20:39:57.151411    6317 status.go:330] ha-105000-m02 host status = "Stopped" (err=<nil>)
	I0718 20:39:57.151447    6317 status.go:343] host is not running, skipping remaining checks
	I0718 20:39:57.151460    6317 status.go:257] ha-105000-m02 status: &{Name:ha-105000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:39:57.151478    6317 status.go:255] checking status of ha-105000-m03 ...
	I0718 20:39:57.151778    6317 cli_runner.go:164] Run: docker container inspect ha-105000-m03 --format={{.State.Status}}
	I0718 20:39:57.169986    6317 status.go:330] ha-105000-m03 host status = "Running" (err=<nil>)
	I0718 20:39:57.170013    6317 host.go:66] Checking if "ha-105000-m03" exists ...
	I0718 20:39:57.170281    6317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-105000-m03
	I0718 20:39:57.188608    6317 host.go:66] Checking if "ha-105000-m03" exists ...
	I0718 20:39:57.188896    6317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:39:57.188952    6317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-105000-m03
	I0718 20:39:57.207235    6317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50726 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1453/.minikube/machines/ha-105000-m03/id_rsa Username:docker}
	I0718 20:39:57.291289    6317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:39:57.301793    6317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-105000
	I0718 20:39:57.320226    6317 kubeconfig.go:125] found "ha-105000" server: "https://127.0.0.1:50623"
	I0718 20:39:57.320249    6317 api_server.go:166] Checking apiserver status ...
	I0718 20:39:57.320290    6317 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:39:57.330976    6317 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2301/cgroup
	W0718 20:39:57.339923    6317 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2301/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0718 20:39:57.339997    6317 ssh_runner.go:195] Run: ls
	I0718 20:39:57.344031    6317 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50623/healthz ...
	I0718 20:39:57.348873    6317 api_server.go:279] https://127.0.0.1:50623/healthz returned 200:
	ok
	I0718 20:39:57.348886    6317 status.go:422] ha-105000-m03 apiserver status = Running (err=<nil>)
	I0718 20:39:57.348896    6317 status.go:257] ha-105000-m03 status: &{Name:ha-105000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:39:57.348907    6317 status.go:255] checking status of ha-105000-m04 ...
	I0718 20:39:57.349192    6317 cli_runner.go:164] Run: docker container inspect ha-105000-m04 --format={{.State.Status}}
	I0718 20:39:57.368290    6317 status.go:330] ha-105000-m04 host status = "Running" (err=<nil>)
	I0718 20:39:57.368315    6317 host.go:66] Checking if "ha-105000-m04" exists ...
	I0718 20:39:57.368583    6317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-105000-m04
	I0718 20:39:57.386928    6317 host.go:66] Checking if "ha-105000-m04" exists ...
	I0718 20:39:57.387187    6317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:39:57.387238    6317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-105000-m04
	I0718 20:39:57.405932    6317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50848 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1453/.minikube/machines/ha-105000-m04/id_rsa Username:docker}
	I0718 20:39:57.487834    6317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:39:57.498686    6317 status.go:257] ha-105000-m04 status: &{Name:ha-105000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.31s)
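The `exit status 7` above is deliberate: `minikube status` composes its exit code from per-component not-running flags, so a stopped node yields non-zero even though the command ran cleanly and still printed useful stdout. A generic Go sketch of separating that case from a real failure (not the harness's code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "ha-105000", "status")
		out, err := cmd.Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// A non-zero code here encodes node state, as in the log above.
			fmt.Printf("status exited %d\n%s", ee.ExitCode(), out)
			return
		}
		if err != nil {
			panic(err) // the command could not run at all
		}
		fmt.Printf("%s", out)
	}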

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

TestMultiControlPlane/serial/RestartSecondaryNode (36.82s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 node start m02 -v=7 --alsologtostderr
E0718 20:40:00.270862    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-105000 node start m02 -v=7 --alsologtostderr: (35.942264591s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.82s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.64s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (285.64s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-105000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-105000 -v=7 --alsologtostderr
E0718 20:40:44.203988    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:40:44.209547    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:40:44.220178    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:40:44.240308    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:40:44.280877    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:40:44.361122    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:40:44.521877    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:40:44.842660    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:40:45.482918    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:40:46.763027    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:40:49.323799    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:40:54.444542    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:41:04.685113    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-105000 -v=7 --alsologtostderr: (33.730570529s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-105000 --wait=true -v=7 --alsologtostderr
E0718 20:41:25.165302    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:42:06.125530    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:43:28.045390    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
E0718 20:44:32.575337    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-105000 --wait=true -v=7 --alsologtostderr: (4m11.785909114s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-105000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (285.64s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.4s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-105000 node delete m03 -v=7 --alsologtostderr: (9.647450321s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.40s)
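
The go-template in the final check prints one Ready condition status per node. An equivalent jsonpath form, with the node name added for readability (a sketch, not what the test runs):

$ # print "<node> <Ready status>" for every node
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'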

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.50s)

TestMultiControlPlane/serial/StopCluster (32.34s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 stop -v=7 --alsologtostderr
E0718 20:45:44.198189    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-105000 stop -v=7 --alsologtostderr: (32.226529281s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-105000 status -v=7 --alsologtostderr: exit status 7 (111.433675ms)
-- stdout --
	ha-105000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-105000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-105000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0718 20:46:04.255435    6823 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:46:04.255712    6823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:46:04.255717    6823 out.go:304] Setting ErrFile to fd 2...
	I0718 20:46:04.255721    6823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:46:04.255894    6823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1453/.minikube/bin
	I0718 20:46:04.256075    6823 out.go:298] Setting JSON to false
	I0718 20:46:04.256097    6823 mustload.go:65] Loading cluster: ha-105000
	I0718 20:46:04.256136    6823 notify.go:220] Checking for updates...
	I0718 20:46:04.256403    6823 config.go:182] Loaded profile config "ha-105000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:46:04.256418    6823 status.go:255] checking status of ha-105000 ...
	I0718 20:46:04.256832    6823 cli_runner.go:164] Run: docker container inspect ha-105000 --format={{.State.Status}}
	I0718 20:46:04.275120    6823 status.go:330] ha-105000 host status = "Stopped" (err=<nil>)
	I0718 20:46:04.275165    6823 status.go:343] host is not running, skipping remaining checks
	I0718 20:46:04.275172    6823 status.go:257] ha-105000 status: &{Name:ha-105000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:46:04.275198    6823 status.go:255] checking status of ha-105000-m02 ...
	I0718 20:46:04.275448    6823 cli_runner.go:164] Run: docker container inspect ha-105000-m02 --format={{.State.Status}}
	I0718 20:46:04.293306    6823 status.go:330] ha-105000-m02 host status = "Stopped" (err=<nil>)
	I0718 20:46:04.293328    6823 status.go:343] host is not running, skipping remaining checks
	I0718 20:46:04.293338    6823 status.go:257] ha-105000-m02 status: &{Name:ha-105000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:46:04.293351    6823 status.go:255] checking status of ha-105000-m04 ...
	I0718 20:46:04.293616    6823 cli_runner.go:164] Run: docker container inspect ha-105000-m04 --format={{.State.Status}}
	I0718 20:46:04.311608    6823 status.go:330] ha-105000-m04 host status = "Stopped" (err=<nil>)
	I0718 20:46:04.311635    6823 status.go:343] host is not running, skipping remaining checks
	I0718 20:46:04.311642    6823 status.go:257] ha-105000-m04 status: &{Name:ha-105000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.34s)
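
Note the exit status 7 from the status command: minikube encodes cluster state in its exit code, so a fully stopped cluster is detectable without parsing the text output. A one-line guard using the profile from this run (a sketch):

$ # non-zero exit (7 here) means at least one node is not Running
$ out/minikube-darwin-amd64 -p ha-105000 status >/dev/null 2>&1 || echo "ha-105000 is not fully running"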

TestMultiControlPlane/serial/RestartCluster (81.41s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-105000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0718 20:46:11.897033    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-105000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m20.576028879s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.41s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

TestMultiControlPlane/serial/AddSecondaryNode (33.75s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-105000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-105000 --control-plane -v=7 --alsologtostderr: (32.92181904s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-105000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (33.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

TestImageBuild/serial/Setup (22.24s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-511000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-511000 --driver=docker : (22.23698471s)
--- PASS: TestImageBuild/serial/Setup (22.24s)

TestImageBuild/serial/NormalBuild (1.53s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-511000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-511000: (1.533119767s)
--- PASS: TestImageBuild/serial/NormalBuild (1.53s)

TestImageBuild/serial/BuildWithBuildArg (0.8s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-511000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.80s)

TestImageBuild/serial/BuildWithDockerIgnore (0.62s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-511000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.62s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.64s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-511000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.64s)
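
The four builds above cover minikube image build end to end: a plain build, --build-opt=build-arg=... with --build-opt=no-cache, a context carrying a .dockerignore, and -f for a Dockerfile outside the context root. The flags compose freely; for example (a usage sketch reusing the test's own tag, contexts, and profile):

$ # uncached build with an explicit Dockerfile and a build arg, on the image-511000 node
$ out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-f -p image-511000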

TestJSONOutput/start/Command (38.5s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-325000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-325000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (38.49787904s)
--- PASS: TestJSONOutput/start/Command (38.50s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.46s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-325000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-325000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.66s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-325000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-325000 --output=json --user=testUser: (5.658022351s)
--- PASS: TestJSONOutput/stop/Command (5.66s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.69s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-890000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-890000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (465.181543ms)
-- stdout --
	{"specversion":"1.0","id":"c123ed00-47ec-4914-a481-5ff7ded91f73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-890000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b820e13-647c-45b0-a1d1-5c9377c267ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"c6dd6284-ffc8-43bb-8d1d-f48407bc8316","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig"}}
	{"specversion":"1.0","id":"f9c766b0-b571-4926-a88c-9feec128d89b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"9034cbed-bbdd-4096-8eb4-b2729c9ada8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"33bf13ca-8bf2-47c8-801b-ab585e4e0e51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1453/.minikube"}}
	{"specversion":"1.0","id":"5d9c80c1-67af-442c-ae50-7d8b18e4e5c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1511cbda-b811-452c-b218-d00e69a80099","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-890000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-890000
--- PASS: TestErrorJSONOutput (0.69s)
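
Every line emitted under --output=json is a CloudEvents envelope whose type field distinguishes steps, info, and errors (io.k8s.sigs.minikube.step / .info / .error, all visible in the dump above). That makes failures easy to surface mechanically, e.g. (a sketch assuming jq):

$ # print only the error events from a JSON-mode run
$ out/minikube-darwin-amd64 start -p json-output-error-890000 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'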

TestKicCustomNetwork/create_custom_network (22.88s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-114000 --network=
E0718 20:49:32.597275    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-114000 --network=: (21.062043436s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-114000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-114000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-114000: (1.798725954s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.88s)

TestKicCustomNetwork/use_default_bridge_network (22.34s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-582000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-582000 --network=bridge: (20.637637004s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-582000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-582000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-582000: (1.687634156s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.34s)

TestKicExistingNetwork (21.91s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-749000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-749000 --network=existing-network: (19.835150266s)
helpers_test.go:175: Cleaning up "existing-network-749000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-749000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-749000: (1.89884488s)
--- PASS: TestKicExistingNetwork (21.91s)
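
TestKicExistingNetwork covers the case where the Docker network exists before minikube starts, so minikube attaches to it instead of creating one. Reproducing that by hand with the names from this run (a sketch; cleanup behavior for pre-existing networks is an assumption here):

$ docker network create existing-network
$ out/minikube-darwin-amd64 start -p existing-network-749000 --network=existing-network
$ # after 'minikube delete', remove the network you created yourself
$ docker network rm existing-network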

TestKicCustomSubnet (22.54s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-643000 --subnet=192.168.60.0/24
E0718 20:50:44.221636    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/functional-258000/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-643000 --subnet=192.168.60.0/24: (20.590484762s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-643000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-643000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-643000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-643000: (1.929691862s)
--- PASS: TestKicCustomSubnet (22.54s)
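
The inspect call above is how the test asserts the requested CIDR was honored; the same check works by hand (names from this run):

$ # the network's IPAM config should echo the requested --subnet back
$ docker network inspect custom-subnet-643000 --format "{{(index .IPAM.Config 0).Subnet}}"
$ # and the node IP should fall inside 192.168.60.0/24
$ out/minikube-darwin-amd64 -p custom-subnet-643000 ip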

TestKicStaticIP (22.69s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-305000 --static-ip=192.168.200.200
E0718 20:50:55.646751    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-305000 --static-ip=192.168.200.200: (20.499236188s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-305000 ip
helpers_test.go:175: Cleaning up "static-ip-305000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-305000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-305000: (2.023386841s)
--- PASS: TestKicStaticIP (22.69s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (47.16s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-137000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-137000 --driver=docker : (21.315288453s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-139000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-139000 --driver=docker : (20.902595001s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-137000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-139000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-139000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-139000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-139000: (1.79640592s)
helpers_test.go:175: Cleaning up "first-137000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-137000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-137000: (1.976232727s)
--- PASS: TestMinikubeProfile (47.16s)
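
profile list -ojson splits profiles into valid and invalid arrays, which is what lets the test assert on both clusters programmatically. Extracting just the names (a sketch assuming jq and the v1.33 JSON shape):

$ # expect first-137000 and second-139000 among the valid profiles
$ out/minikube-darwin-amd64 profile list -ojson | jq -r '.valid[].Name'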

TestMountStart/serial/StartWithMountFirst (7.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-884000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-884000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.080259107s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.08s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-884000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (7.68s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-901000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-901000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.683413833s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.68s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-901000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-884000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-884000 --alsologtostderr -v=5: (1.650974267s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-901000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.41s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-901000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-901000: (1.41355666s)
--- PASS: TestMountStart/serial/Stop (1.41s)

TestMountStart/serial/RestartStopped (8.68s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-901000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-901000: (7.680861355s)
--- PASS: TestMountStart/serial/RestartStopped (8.68s)

TestPreload (91.73s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-316000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-316000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (58.054085101s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-316000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-316000 image pull gcr.io/k8s-minikube/busybox: (1.38795326s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-316000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-316000: (10.769857738s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-316000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
E0718 21:39:32.668640    1993 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1453/.minikube/profiles/addons-659000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-316000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (19.135826721s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-316000 image list
helpers_test.go:175: Cleaning up "test-preload-316000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-316000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-316000: (2.126878198s)
--- PASS: TestPreload (91.73s)
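
The point of TestPreload: an image pulled into a --preload=false cluster must still be present after the stop and restart, and the final image list is that assertion. Checking it by hand (profile name from this run):

$ # busybox should survive the stop/start cycle
$ out/minikube-darwin-amd64 -p test-preload-316000 image list | grep busybox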

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.3s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3881899834/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3881899834/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3881899834/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3881899834/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.30s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.38s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-1453/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2287784410/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2287784410/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2287784410/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2287784410/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.38s)
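
Both upgrade cases print the same warning: the hyperkit driver binary needs to be setuid root, and with --interactive=false the sudo prompt cannot be answered, so the driver update is skipped. Outside CI, the two commands minikube lists can be run once by hand (the paths above are throwaway temp MINIKUBE_HOME dirs; substitute your own):

$ sudo chown root:wheel "$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit"
$ sudo chmod u+s "$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit"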

Test skip (19/217)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestAddons/parallel/Registry (14.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 11.888148ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-g7448" [e9178bf8-951b-42e1-aead-76c005e1e53c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00394055s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9hkxz" [27e4334e-03e5-446d-90fe-177f527ff774] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003650125s
addons_test.go:342: (dbg) Run:  kubectl --context addons-659000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-659000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-659000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.705004713s)
addons_test.go:357: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.82s)

TestAddons/parallel/Ingress (15.87s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-659000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-659000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-659000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [afcb5a48-b02c-4ed5-9f05-433b4f57d9f8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [afcb5a48-b02c-4ed5-9f05-433b4f57d9f8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.004463214s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-659000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (15.87s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (9.24s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-258000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-258000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-89s9f" [6b2fb2da-df65-47ee-a44a-9975b08fca60] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-89s9f" [6b2fb2da-df65-47ee-a44a-9975b08fca60] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003678944s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (9.24s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)