Test Report: Docker_macOS 18644

382efc9ec0890000466ab6258d7a89af3764444c:2024-04-15:34035

Failed tests (22/213)

TestOffline (754.93s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-615000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-615000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m34.03606894s)

-- stdout --
	* [offline-docker-615000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-615000" primary control-plane node in "offline-docker-615000" cluster
	* Pulling base image v0.0.43-1712854342-18621 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-615000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0415 06:00:21.948154   32777 out.go:291] Setting OutFile to fd 1 ...
	I0415 06:00:21.948410   32777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 06:00:21.948416   32777 out.go:304] Setting ErrFile to fd 2...
	I0415 06:00:21.948420   32777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 06:00:21.948590   32777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 06:00:21.950051   32777 out.go:298] Setting JSON to false
	I0415 06:00:21.973108   32777 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10791,"bootTime":1713175230,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 06:00:21.973203   32777 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 06:00:21.995474   32777 out.go:177] * [offline-docker-615000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 06:00:22.016172   32777 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 06:00:22.016179   32777 notify.go:220] Checking for updates...
	I0415 06:00:22.058156   32777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 06:00:22.079178   32777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 06:00:22.099964   32777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 06:00:22.121179   32777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	I0415 06:00:22.142153   32777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 06:00:22.163155   32777 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 06:00:22.218194   32777 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 06:00:22.218355   32777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 06:00:22.331645   32777 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:false NGoroutines:183 SystemTime:2024-04-15 13:00:22.321175765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 06:00:22.353440   32777 out.go:177] * Using the docker driver based on user configuration
	I0415 06:00:22.375365   32777 start.go:297] selected driver: docker
	I0415 06:00:22.375402   32777 start.go:901] validating driver "docker" against <nil>
	I0415 06:00:22.375420   32777 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 06:00:22.380024   32777 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 06:00:22.489277   32777 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:false NGoroutines:183 SystemTime:2024-04-15 13:00:22.478043891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 06:00:22.489451   32777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 06:00:22.489649   32777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 06:00:22.511031   32777 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 06:00:22.532475   32777 cni.go:84] Creating CNI manager for ""
	I0415 06:00:22.532534   32777 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 06:00:22.532550   32777 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 06:00:22.532665   32777 start.go:340] cluster config:
	{Name:offline-docker-615000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-615000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 06:00:22.554256   32777 out.go:177] * Starting "offline-docker-615000" primary control-plane node in "offline-docker-615000" cluster
	I0415 06:00:22.617305   32777 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 06:00:22.659284   32777 out.go:177] * Pulling base image v0.0.43-1712854342-18621 ...
	I0415 06:00:22.701157   32777 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:00:22.701208   32777 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 06:00:22.701258   32777 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 06:00:22.701278   32777 cache.go:56] Caching tarball of preloaded images
	I0415 06:00:22.701525   32777 preload.go:173] Found /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 06:00:22.701548   32777 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 06:00:22.703193   32777 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/offline-docker-615000/config.json ...
	I0415 06:00:22.703290   32777 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/offline-docker-615000/config.json: {Name:mk8c6fdda29169eaa5a4f41a6c4fa43db940ce69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 06:00:22.754943   32777 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon, skipping pull
	I0415 06:00:22.754997   32777 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in daemon, skipping load
	I0415 06:00:22.755019   32777 cache.go:194] Successfully downloaded all kic artifacts
	I0415 06:00:22.755065   32777 start.go:360] acquireMachinesLock for offline-docker-615000: {Name:mk29015139e02b21b64a01194a230b324e1499bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 06:00:22.755225   32777 start.go:364] duration metric: took 148.811µs to acquireMachinesLock for "offline-docker-615000"
	I0415 06:00:22.755254   32777 start.go:93] Provisioning new machine with config: &{Name:offline-docker-615000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-615000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 06:00:22.755398   32777 start.go:125] createHost starting for "" (driver="docker")
	I0415 06:00:22.799104   32777 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 06:00:22.799509   32777 start.go:159] libmachine.API.Create for "offline-docker-615000" (driver="docker")
	I0415 06:00:22.799557   32777 client.go:168] LocalClient.Create starting
	I0415 06:00:22.799786   32777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 06:00:22.799894   32777 main.go:141] libmachine: Decoding PEM data...
	I0415 06:00:22.799928   32777 main.go:141] libmachine: Parsing certificate...
	I0415 06:00:22.800086   32777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 06:00:22.800175   32777 main.go:141] libmachine: Decoding PEM data...
	I0415 06:00:22.800198   32777 main.go:141] libmachine: Parsing certificate...
	I0415 06:00:22.822442   32777 cli_runner.go:164] Run: docker network inspect offline-docker-615000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 06:00:22.872918   32777 cli_runner.go:211] docker network inspect offline-docker-615000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 06:00:22.873029   32777 network_create.go:281] running [docker network inspect offline-docker-615000] to gather additional debugging logs...
	I0415 06:00:22.873051   32777 cli_runner.go:164] Run: docker network inspect offline-docker-615000
	W0415 06:00:22.922430   32777 cli_runner.go:211] docker network inspect offline-docker-615000 returned with exit code 1
	I0415 06:00:22.922456   32777 network_create.go:284] error running [docker network inspect offline-docker-615000]: docker network inspect offline-docker-615000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-615000 not found
	I0415 06:00:22.922476   32777 network_create.go:286] output of [docker network inspect offline-docker-615000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-615000 not found
	
	** /stderr **
	I0415 06:00:22.922610   32777 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:00:23.035480   32777 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:00:23.037404   32777 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:00:23.038163   32777 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0007b78d0}
	I0415 06:00:23.038196   32777 network_create.go:124] attempt to create docker network offline-docker-615000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 06:00:23.038329   32777 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-615000 offline-docker-615000
	I0415 06:00:23.163900   32777 network_create.go:108] docker network offline-docker-615000 192.168.67.0/24 created
	I0415 06:00:23.163951   32777 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-615000" container
	I0415 06:00:23.164123   32777 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 06:00:23.215205   32777 cli_runner.go:164] Run: docker volume create offline-docker-615000 --label name.minikube.sigs.k8s.io=offline-docker-615000 --label created_by.minikube.sigs.k8s.io=true
	I0415 06:00:23.266045   32777 oci.go:103] Successfully created a docker volume offline-docker-615000
	I0415 06:00:23.266163   32777 cli_runner.go:164] Run: docker run --rm --name offline-docker-615000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-615000 --entrypoint /usr/bin/test -v offline-docker-615000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 06:00:23.581486   32777 oci.go:107] Successfully prepared a docker volume offline-docker-615000
	I0415 06:00:23.581523   32777 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:00:23.581536   32777 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 06:00:23.581631   32777 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-615000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 06:06:22.786997   32777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:06:22.787130   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:06:22.840807   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:06:22.840940   32777 retry.go:31] will retry after 341.313709ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:23.183477   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:06:23.236743   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:06:23.236855   32777 retry.go:31] will retry after 557.935457ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:23.795399   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:06:23.846778   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:06:23.846884   32777 retry.go:31] will retry after 809.04975ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:24.658320   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:06:24.710507   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	W0415 06:06:24.710613   32777 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	
	W0415 06:06:24.710636   32777 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:24.710700   32777 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:06:24.710755   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:06:24.759630   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:06:24.759721   32777 retry.go:31] will retry after 346.975772ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:25.109068   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:06:25.161710   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:06:25.161800   32777 retry.go:31] will retry after 384.996429ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:25.549188   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:06:25.603464   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:06:25.603560   32777 retry.go:31] will retry after 359.235567ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:25.963755   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:06:26.034963   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	W0415 06:06:26.035063   32777 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	
	W0415 06:06:26.035085   32777 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:26.035102   32777 start.go:128] duration metric: took 6m3.292792354s to createHost
	I0415 06:06:26.035108   32777 start.go:83] releasing machines lock for "offline-docker-615000", held for 6m3.292976129s
	W0415 06:06:26.035125   32777 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 06:06:26.035573   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:26.084212   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:26.084269   32777 delete.go:82] Unable to get host status for offline-docker-615000, assuming it has already been deleted: state: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	W0415 06:06:26.084339   32777 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 06:06:26.084352   32777 start.go:728] Will try again in 5 seconds ...
	I0415 06:06:31.087239   32777 start.go:360] acquireMachinesLock for offline-docker-615000: {Name:mk29015139e02b21b64a01194a230b324e1499bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 06:06:31.087558   32777 start.go:364] duration metric: took 188.029µs to acquireMachinesLock for "offline-docker-615000"
	I0415 06:06:31.087603   32777 start.go:96] Skipping create...Using existing machine configuration
	I0415 06:06:31.087621   32777 fix.go:54] fixHost starting: 
	I0415 06:06:31.088115   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:31.141392   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:31.141442   32777 fix.go:112] recreateIfNeeded on offline-docker-615000: state= err=unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:31.141459   32777 fix.go:117] machineExists: false. err=machine does not exist
	I0415 06:06:31.163232   32777 out.go:177] * docker "offline-docker-615000" container is missing, will recreate.
	I0415 06:06:31.185790   32777 delete.go:124] DEMOLISHING offline-docker-615000 ...
	I0415 06:06:31.186050   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:31.235662   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	W0415 06:06:31.235730   32777 stop.go:83] unable to get state: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:31.235745   32777 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:31.236138   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:31.284526   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:31.284587   32777 delete.go:82] Unable to get host status for offline-docker-615000, assuming it has already been deleted: state: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:31.284673   32777 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-615000
	W0415 06:06:31.332851   32777 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-615000 returned with exit code 1
	I0415 06:06:31.332892   32777 kic.go:371] could not find the container offline-docker-615000 to remove it. will try anyways
	I0415 06:06:31.332962   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:31.379960   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	W0415 06:06:31.380005   32777 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:31.380090   32777 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-615000 /bin/bash -c "sudo init 0"
	W0415 06:06:31.427264   32777 cli_runner.go:211] docker exec --privileged -t offline-docker-615000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 06:06:31.427303   32777 oci.go:650] error shutdown offline-docker-615000: docker exec --privileged -t offline-docker-615000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:32.428997   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:32.482530   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:32.482586   32777 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:32.482600   32777 oci.go:664] temporary error: container offline-docker-615000 status is  but expect it to be exited
	I0415 06:06:32.482620   32777 retry.go:31] will retry after 613.568099ms: couldn't verify container is exited. %v: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:33.098522   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:33.150584   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:33.150632   32777 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:33.150648   32777 oci.go:664] temporary error: container offline-docker-615000 status is  but expect it to be exited
	I0415 06:06:33.150674   32777 retry.go:31] will retry after 834.801216ms: couldn't verify container is exited. %v: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:33.987856   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:34.040808   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:34.040861   32777 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:34.040873   32777 oci.go:664] temporary error: container offline-docker-615000 status is  but expect it to be exited
	I0415 06:06:34.040900   32777 retry.go:31] will retry after 1.084997243s: couldn't verify container is exited. %v: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:35.127011   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:35.180319   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:35.180371   32777 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:35.180381   32777 oci.go:664] temporary error: container offline-docker-615000 status is  but expect it to be exited
	I0415 06:06:35.180402   32777 retry.go:31] will retry after 1.439051263s: couldn't verify container is exited. %v: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:36.620177   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:36.671378   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:36.671427   32777 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:36.671449   32777 oci.go:664] temporary error: container offline-docker-615000 status is  but expect it to be exited
	I0415 06:06:36.671472   32777 retry.go:31] will retry after 2.925207745s: couldn't verify container is exited. %v: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:39.598223   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:39.648708   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:39.648755   32777 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:39.648764   32777 oci.go:664] temporary error: container offline-docker-615000 status is  but expect it to be exited
	I0415 06:06:39.648793   32777 retry.go:31] will retry after 3.028977016s: couldn't verify container is exited. %v: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:42.678866   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:42.732258   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:42.732308   32777 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:42.732319   32777 oci.go:664] temporary error: container offline-docker-615000 status is  but expect it to be exited
	I0415 06:06:42.732339   32777 retry.go:31] will retry after 6.438146415s: couldn't verify container is exited. %v: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:49.171778   32777 cli_runner.go:164] Run: docker container inspect offline-docker-615000 --format={{.State.Status}}
	W0415 06:06:49.225649   32777 cli_runner.go:211] docker container inspect offline-docker-615000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:49.225698   32777 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:06:49.225710   32777 oci.go:664] temporary error: container offline-docker-615000 status is  but expect it to be exited
	I0415 06:06:49.225740   32777 oci.go:88] couldn't shut down offline-docker-615000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	 
	I0415 06:06:49.225816   32777 cli_runner.go:164] Run: docker rm -f -v offline-docker-615000
	I0415 06:06:49.274763   32777 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-615000
	W0415 06:06:49.323608   32777 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-615000 returned with exit code 1
	I0415 06:06:49.323728   32777 cli_runner.go:164] Run: docker network inspect offline-docker-615000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:06:49.372674   32777 cli_runner.go:164] Run: docker network rm offline-docker-615000
	I0415 06:06:49.478697   32777 fix.go:124] Sleeping 1 second for extra luck!
	I0415 06:06:50.480932   32777 start.go:125] createHost starting for "" (driver="docker")
	I0415 06:06:50.501980   32777 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 06:06:50.502155   32777 start.go:159] libmachine.API.Create for "offline-docker-615000" (driver="docker")
	I0415 06:06:50.502182   32777 client.go:168] LocalClient.Create starting
	I0415 06:06:50.502423   32777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 06:06:50.502528   32777 main.go:141] libmachine: Decoding PEM data...
	I0415 06:06:50.502552   32777 main.go:141] libmachine: Parsing certificate...
	I0415 06:06:50.502642   32777 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 06:06:50.502717   32777 main.go:141] libmachine: Decoding PEM data...
	I0415 06:06:50.502738   32777 main.go:141] libmachine: Parsing certificate...
	I0415 06:06:50.503519   32777 cli_runner.go:164] Run: docker network inspect offline-docker-615000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 06:06:50.556358   32777 cli_runner.go:211] docker network inspect offline-docker-615000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 06:06:50.556454   32777 network_create.go:281] running [docker network inspect offline-docker-615000] to gather additional debugging logs...
	I0415 06:06:50.556470   32777 cli_runner.go:164] Run: docker network inspect offline-docker-615000
	W0415 06:06:50.618447   32777 cli_runner.go:211] docker network inspect offline-docker-615000 returned with exit code 1
	I0415 06:06:50.618478   32777 network_create.go:284] error running [docker network inspect offline-docker-615000]: docker network inspect offline-docker-615000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-615000 not found
	I0415 06:06:50.618490   32777 network_create.go:286] output of [docker network inspect offline-docker-615000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-615000 not found
	
	** /stderr **
	I0415 06:06:50.618661   32777 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:06:50.671681   32777 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:06:50.673265   32777 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:06:50.674787   32777 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:06:50.676396   32777 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:06:50.678014   32777 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:06:50.678518   32777 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021dc320}
	I0415 06:06:50.678535   32777 network_create.go:124] attempt to create docker network offline-docker-615000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0415 06:06:50.678640   32777 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-615000 offline-docker-615000
	I0415 06:06:50.764950   32777 network_create.go:108] docker network offline-docker-615000 192.168.94.0/24 created
	I0415 06:06:50.765000   32777 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-615000" container
	I0415 06:06:50.765125   32777 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 06:06:50.817057   32777 cli_runner.go:164] Run: docker volume create offline-docker-615000 --label name.minikube.sigs.k8s.io=offline-docker-615000 --label created_by.minikube.sigs.k8s.io=true
	I0415 06:06:50.867066   32777 oci.go:103] Successfully created a docker volume offline-docker-615000
	I0415 06:06:50.867170   32777 cli_runner.go:164] Run: docker run --rm --name offline-docker-615000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-615000 --entrypoint /usr/bin/test -v offline-docker-615000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 06:06:51.138180   32777 oci.go:107] Successfully prepared a docker volume offline-docker-615000
	I0415 06:06:51.138223   32777 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:06:51.138236   32777 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 06:06:51.138339   32777 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-615000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 06:12:50.489814   32777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:12:50.490021   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:50.541936   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:50.542037   32777 retry.go:31] will retry after 287.752502ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:50.832152   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:50.884436   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:50.884551   32777 retry.go:31] will retry after 373.857142ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:51.260186   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:51.314073   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:51.314199   32777 retry.go:31] will retry after 345.828553ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:51.661849   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:51.714429   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	W0415 06:12:51.714547   32777 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	
	W0415 06:12:51.714568   32777 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:51.714622   32777 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:12:51.714682   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:51.763023   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:51.763123   32777 retry.go:31] will retry after 242.634129ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:52.008117   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:52.058971   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:52.059064   32777 retry.go:31] will retry after 256.544896ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:52.317954   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:52.369866   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:52.369960   32777 retry.go:31] will retry after 479.360662ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:52.850132   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:52.900728   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	W0415 06:12:52.900842   32777 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	
	W0415 06:12:52.900860   32777 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:52.900875   32777 start.go:128] duration metric: took 6m2.432986587s to createHost
	I0415 06:12:52.900940   32777 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:12:52.901003   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:52.948999   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:52.949106   32777 retry.go:31] will retry after 228.728394ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:53.178618   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:53.231190   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:53.231294   32777 retry.go:31] will retry after 438.572937ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:53.672264   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:53.725779   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:53.725874   32777 retry.go:31] will retry after 456.730813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:54.184977   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:54.237011   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	W0415 06:12:54.237112   32777 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	
	W0415 06:12:54.237132   32777 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:54.237190   32777 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:12:54.237246   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:54.284715   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:54.284808   32777 retry.go:31] will retry after 229.840663ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:54.515102   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:54.567845   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:54.567945   32777 retry.go:31] will retry after 393.62379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:54.963963   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:55.017846   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	I0415 06:12:55.017946   32777 retry.go:31] will retry after 670.297358ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:55.690619   32777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000
	W0415 06:12:55.741652   32777 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000 returned with exit code 1
	W0415 06:12:55.741757   32777 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	
	W0415 06:12:55.741776   32777 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-615000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-615000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000
	I0415 06:12:55.741788   32777 fix.go:56] duration metric: took 6m24.668040664s for fixHost
	I0415 06:12:55.741794   32777 start.go:83] releasing machines lock for "offline-docker-615000", held for 6m24.668093166s
	W0415 06:12:55.741870   32777 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-615000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-615000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 06:12:55.790200   32777 out.go:177] 
	W0415 06:12:55.811252   32777 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 06:12:55.811282   32777 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 06:12:55.811312   32777 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 06:12:55.832522   32777 out.go:177] 

** /stderr **
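
The subnet scan earlier in this stderr (skipping 192.168.49.0/24, .58, .67, .76 and .85 before settling on 192.168.94.0/24) steps the third octet by 9 and takes the first private /24 that no existing Docker network reserves. A minimal Go sketch of that scan; isReserved here is a hypothetical stand-in for the real Docker network inspection:

    package main

    import "fmt"

    // firstFreeSubnet mirrors the scan in the log: walk 192.168.x.0/24
    // candidates, third octet stepping by 9, and return the first subnet
    // the isReserved callback does not veto.
    func firstFreeSubnet(isReserved func(string) bool) string {
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !isReserved(subnet) {
    			return subnet
    		}
    	}
    	return ""
    }

    func main() {
    	reserved := map[string]bool{ // the five subnets skipped above
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    		"192.168.85.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(func(s string) bool { return reserved[s] }))
    	// prints 192.168.94.0/24, matching the subnet chosen in this run
    }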
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-615000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-04-15 06:12:55.906597 -0700 PDT m=+6243.098210519
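
The failure mode in the stderr above is a polling loop that can never succeed: cli_runner keeps asking Docker for the host port published for the container's 22/tcp, retry.go waits a few hundred jittered milliseconds between attempts, and every attempt fails with "No such container" because the container was never created. A minimal sketch of that poll, assuming hypothetical helper names (sshPort is not minikube's actual API):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // sshPort asks Docker which host port is published for the container's
    // 22/tcp, retrying briefly between failures, as retry.go does above.
    func sshPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	var lastErr error
    	for attempt := 0; attempt < 5; attempt++ {
    		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		lastErr = err // "No such container" while the container is missing
    		time.Sleep(300 * time.Millisecond)
    	}
    	return "", fmt.Errorf("get port 22 for %q: %w", container, lastErr)
    }

    func main() {
    	fmt.Println(sshPort("offline-docker-615000"))
    }

In the real run the retries are bounded by the 360-second create-host deadline rather than a fixed attempt count, which is why each createHost/fixHost pass above burns a full six minutes before DRV_CREATE_TIMEOUT.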
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-615000
helpers_test.go:235: (dbg) docker inspect offline-docker-615000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-615000",
	        "Id": "619d6e2448870317981016df031f8bf183bdc543a877302c063079b7b50a5fc0",
	        "Created": "2024-04-15T13:06:50.72575606Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-615000"
	        }
	    }
	]

-- /stdout --
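
Note what the inspect above actually shows: the Docker network "offline-docker-615000" survived with an empty "Containers" map, while the container itself was never created; this is why the profile still needs "minikube delete" to clean up. A small sketch of checking that such a leftover network is orphaned before removing it (this helper is assumed for illustration, not part of the test suite):

    package main

    import (
    	"os/exec"
    	"strings"
    )

    func main() {
    	name := "offline-docker-615000"
    	// len works on the Containers map in docker's Go-template output.
    	out, err := exec.Command("docker", "network", "inspect",
    		"-f", "{{len .Containers}}", name).Output()
    	if err == nil && strings.TrimSpace(string(out)) == "0" {
    		// No containers attached: safe to delete the orphaned network.
    		exec.Command("docker", "network", "rm", name).Run()
    	}
    }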
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-615000 -n offline-docker-615000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-615000 -n offline-docker-615000: exit status 7 (113.314792ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0415 06:12:56.069654   33658 status.go:249] status error: host: state: unknown state "offline-docker-615000": docker container inspect offline-docker-615000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-615000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-615000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-615000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-615000
--- FAIL: TestOffline (754.93s)

TestCertOptions (7201.314s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-822000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (3m18s)
	TestCertOptions (2m55s)
	TestNetworkPlugins (28m31s)
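
The dump below is Go's standard timeout machinery, not a crash in any one test: the test binary arms an alarm for its -timeout budget (2h0m0s here), and testing.(*M).startAlarm panics the whole process when it fires, listing whichever tests were still running. The mechanism amounts to this sketch:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	timeout := 2 * time.Hour
    	// startAlarm in the trace below is essentially this timer: once the
    	// budget is spent, panic and take a full goroutine dump with us.
    	alarm := time.AfterFunc(timeout, func() {
    		panic(fmt.Sprintf("test timed out after %v", timeout))
    	})
    	defer alarm.Stop() // a run that finishes in time disarms the alarm
    	// ... run tests ...
    }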

goroutine 2454 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 15 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00053cb60, 0xc000aa5bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0007f6048, {0xd27bf20, 0x2a, 0x2a}, {0x8f2cbc5?, 0xa9bd4e8?, 0xd29e2c0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000b5ac80)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000b5ac80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00069eb80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 172 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0005a4a80, 0xc000988420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 193
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 36 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 35
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 1196 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc002b3a840, 0xc002b22480)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1195
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 561 [syscall, 2 minutes]:
syscall.syscall6(0xc00282bf80?, 0x1000000000010?, 0x10000000019?, 0x54686958?, 0x90?, 0xdb7c108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0024298a0?, 0x8e6d165?, 0x90?, 0xbed9960?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x8f9df05?, 0xc0024298d4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000b843f0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00278a2c0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00278a2c0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0021729c0, 0xc00278a2c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc0021729c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc0021729c0, 0xbf69b78)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 868 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc002230510, 0x2b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xba84060?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0024d3560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002230540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009f9930, {0xbf75d80, 0xc000a1aba0}, 0x1, 0xc000988420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009f9930, 0x3b9aca00, 0x0, 0x1, 0xc000988420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 862
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef
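
Goroutine 868 above, like goroutine 175 further down, is one of client-go's certificate-rotation workers and is idle by design: a workqueue consumer parked in sync.Cond.Wait inside Get() until an item arrives, kept alive by wait.BackoffUntil. The shape of that loop, as a minimal sketch rather than the real dynamicClientCert type:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/util/workqueue"
    )

    func main() {
    	queue := workqueue.New()
    	stop := make(chan struct{})
    	go wait.Until(func() {
    		for {
    			item, shutdown := queue.Get() // parks in sync.Cond.Wait when empty
    			if shutdown {
    				return
    			}
    			fmt.Println("processing", item)
    			queue.Done(item)
    		}
    	}, time.Second, stop)
    	queue.Add("rotate-certs")
    	time.Sleep(100 * time.Millisecond)
    	queue.ShutDown()
    	close(stop)
    }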

goroutine 171 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a24ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 193
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 175 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0005a4a50, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xba84060?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000a24a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0005a4a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009a56c0, {0xbf75d80, 0xc0009f7230}, 0x1, 0xc000988420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009a56c0, 0x3b9aca00, 0x0, 0x1, 0xc000988420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 172
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2429 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x54b68290, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00282cc60?, 0xc0021e0400?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00282cc60, {0xc0021e0400, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002214398, {0xc0021e0400?, 0xc000af5d48?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021b0b10, {0xbf74788, 0xc002a00048})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xbf748c8, 0xc0021b0b10}, {0xbf74788, 0xc002a00048}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000af5e78?, {0xbf748c8, 0xc0021b0b10})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000af5f38?, {0xbf748c8?, 0xc0021b0b10?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xbf748c8, 0xc0021b0b10}, {0xbf74848, 0xc002214398}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002b22780?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 562
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab
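
Goroutine 2429 above (and 2428, 2439 and 2440 later in the dump) is one of os/exec's output copiers: when a Cmd's Stdout or Stderr is not an *os.File, Start wires up an os.Pipe plus a goroutine that io.Copy's the child's output into the destination buffer, which is exactly the writerDescriptor frames shown. They sit in IO wait for as long as the hung minikube child produces nothing. The pattern in miniature:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	var out bytes.Buffer
    	cmd := exec.Command("echo", "hello")
    	// A non-*os.File destination makes Start spawn a pipe-draining
    	// copier goroutine (the IO-wait frames in the dump above).
    	cmd.Stdout = &out
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    	fmt.Print(out.String())
    }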

goroutine 2135 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002bc3040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002bc3040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc002bc3040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc002bc3040, 0xbf69c20)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
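
Goroutine 2135 above, and the many like it in this dump, is not stuck in minikube code at all: the test called t.Parallel() and is parked in waitParallel until one of the parallelism slots frees up. With TestCertOptions and TestCertExpiration wedged on hung minikube invocations, no slot ever frees, so these tests sit for 29 minutes until the global alarm fires. A hypothetical mini-test showing the gate; -parallel (defaulting to GOMAXPROCS) bounds how many such bodies run at once:

    package integration

    import "testing"

    func TestExample(t *testing.T) {
    	t.Parallel() // parks in testing.(*testContext).waitParallel
    	// body runs only once a parallel slot is free
    }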

goroutine 176 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xbf986f0, 0xc000988420}, 0xc000a9bf50, 0xc002091f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xbf986f0, 0xc000988420}, 0x0?, 0xc000a9bf50, 0xc000a9bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xbf986f0?, 0xc000988420?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 172
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 209 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 176
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 862 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002230540, 0xc000988420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 753
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 1013 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc002424580, 0xc002347440)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1012
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 1737 [syscall, 94 minutes]:
syscall.syscall(0x0?, 0xc002bb4e28?, 0xc000a9a6f0?, 0x8f0d05d?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc002bb4d20?, 0xc0006a5340?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1723
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
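
Goroutine 1737 above has spent 94 minutes inside syscall.Flock: minikube serializes host creation through a file lock (github.com/juju/mutex/v2), and a blocking flock(2) on a lock file someone else holds simply waits forever. The primitive in isolation, with a made-up lock path:

    package main

    import (
    	"os"
    	"syscall"
    )

    func main() {
    	f, err := os.OpenFile("/tmp/demo.lock", os.O_CREATE|os.O_RDWR, 0o600)
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	// LOCK_EX without LOCK_NB blocks until the current holder releases;
    	// this is the 94-minute wait in the frame above.
    	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
    		panic(err)
    	}
    	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
    }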

goroutine 2147 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00221a9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00221a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00221a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00221a9c0, 0xc00069e280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2126
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2039 [chan receive, 29 minutes]:
testing.(*T).Run(0xc002bc2000, {0xa965109?, 0x8e01a194b0c?}, 0xc00216e258)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002bc2000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002bc2000, 0xbf69c58)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 626 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0x54b68198, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00069fa00?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00069fa00)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00069fa00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000a339a0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000a339a0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0009c6d20, {0xbf8c080, 0xc000a339a0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0009c6d20)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0024bb1e0?, 0xc0024bb1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 607
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
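
Goroutine 626 above has been in Accept for 113 minutes, but that is benign: startHTTPProxy launches a test HTTP proxy once and leaves it serving for the life of the binary, and a server parked in Accept is idle, not stuck. The shape of that goroutine, as a sketch with a made-up address and handler:

    package main

    import "net/http"

    func main() {
    	srv := &http.Server{Addr: "127.0.0.1:8080", Handler: http.NewServeMux()}
    	go srv.ListenAndServe() // that goroutine blocks in Accept between requests
    	select {}               // keep the process alive, as the test binary does
    }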

goroutine 2126 [chan receive, 29 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00221a000, 0xc00216e258)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2039
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 562 [syscall, 3 minutes]:
syscall.syscall6(0xc0021b1f80?, 0x1000000000010?, 0x10000000019?, 0x54c4dd58?, 0x90?, 0xdb7c108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000affa40?, 0x8e6d165?, 0x90?, 0xbed9960?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x8f9df05?, 0xc000affa74, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0022d04b0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00278a6e0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00278a6e0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002172b60, 0xc00278a6e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc002172b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc002172b60, 0xbf69b70)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2145 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00221a680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00221a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00221a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00221a680, 0xc00069e180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2126
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 869 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xbf986f0, 0xc000988420}, 0xc000a99f50, 0xc002092f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xbf986f0, 0xc000988420}, 0xa0?, 0xc000a99f50, 0xc000a99f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xbf986f0?, 0xc000988420?}, 0xc000a99fb0?, 0x93f14b8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000a99fd0?, 0x8fe6ec4?, 0xc0005168a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 862
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 2041 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002bc2340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002bc2340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc002bc2340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc002bc2340, 0xbf69c70)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1157 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc00283dce0, 0xc00269fbc0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1156
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 1266 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc002aa90e0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1258
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 1267 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc002aa90e0)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1258
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 870 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 869
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2040 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002bc21a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002bc21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc002bc21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc002bc21a0, 0xbf69c60)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2134 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002bc2ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002bc2ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc002bc2ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc002bc2ea0, 0xbf69ca8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2146 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00221a820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00221a820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00221a820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00221a820, 0xc00069e200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2126
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2128 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00221a4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00221a4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00221a4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00221a4e0, 0xc00069e100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2126
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1177 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc002a0d760, 0xc002a28b40)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 740
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2112 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002bc2820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002bc2820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc002bc2820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc002bc2820, 0xbf69ca0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 861 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0024d3680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 753
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2148 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00221ab60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00221ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00221ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00221ab60, 0xc00069e300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2126
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2439 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x54b68480, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00229a900?, 0xc0007efa98?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00229a900, {0xc0007efa98, 0x568, 0x568})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002a000c0, {0xc0007efa98?, 0xc002525500?, 0x22e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00282a570, {0xbf74788, 0xc002214250})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xbf748c8, 0xc00282a570}, {0xbf74788, 0xc002214250}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000af7678?, {0xbf748c8, 0xc00282a570})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000af7738?, {0xbf748c8?, 0xc00282a570?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xbf748c8, 0xc00282a570}, {0xbf74848, 0xc002a000c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002b285a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 561
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 2149 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00221ad00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00221ad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00221ad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00221ad00, 0xc00069e380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2126
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2127 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00221a340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00221a340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00221a340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00221a340, 0xc00069e000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2126
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2151 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00221b040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00221b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00221b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00221b040, 0xc00069e480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2126
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2150 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00221aea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00221aea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00221aea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00221aea0, 0xc00069e400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2126
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2136 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002bc31e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002bc31e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc002bc31e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc002bc31e0, 0xbf69c38)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2133 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0009b7b80)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002bc24e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002bc24e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc002bc24e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc002bc24e0, 0xbf69c80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
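
The goroutine dumps above all show the same state: subtests of TestNetworkPlugins and the version-upgrade tests have called t.Parallel() via MaybeParallel and are parked in testing.(*testContext).waitParallel, waiting for a parallelism slot; the "chan receive, 29 minutes" header means they sat in that queue for essentially the whole run. A minimal Go sketch of the pattern, with illustrative subtest names (this is not the minikube test code):

package example

import (
	"testing"
	"time"
)

// Each subtest calls t.Parallel() and then blocks inside testing's
// waitParallel ("chan receive" in the dumps above) until the -test.parallel
// limit admits another runner.
func TestQueuedSubtests(t *testing.T) {
	for _, name := range []string{"auto", "kubenet", "bridge"} {
		name := name // loop-variable capture (needed on Go <1.22)
		t.Run(name, func(t *testing.T) {
			t.Parallel()                      // may park here for a long time under load
			time.Sleep(50 * time.Millisecond) // stand-in for the real subtest body
		})
	}
}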

goroutine 2428 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x54b67fa8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00282cba0?, 0xc00216aaa1?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00282cba0, {0xc00216aaa1, 0x55f, 0x55f})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002214360, {0xc00216aaa1?, 0xc0027f41c0?, 0x237?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0021b0ae0, {0xbf74788, 0xc002a00040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xbf748c8, 0xc0021b0ae0}, {0xbf74788, 0xc002a00040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000af3e78?, {0xbf748c8, 0xc0021b0ae0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000af3f38?, {0xbf748c8?, 0xc0021b0ae0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xbf748c8, 0xc0021b0ae0}, {0xbf74848, 0xc002214360}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002b28840?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 562
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 2440 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x54b675f8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00229a9c0?, 0xc00045d400?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00229a9c0, {0xc00045d400, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002a000d8, {0xc00045d400?, 0xc0023ee700?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00282a5a0, {0xbf74788, 0xc002214268})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xbf748c8, 0xc00282a5a0}, {0xbf74788, 0xc002214268}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000af3678?, {0xbf748c8, 0xc00282a5a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000af3738?, {0xbf748c8?, 0xc00282a5a0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xbf748c8, 0xc00282a5a0}, {0xbf74848, 0xc002a000d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002ba84e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 561
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

goroutine 2430 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc00278a6e0, 0xc002b28900)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 562
	/usr/local/go/src/os/exec/exec.go:750 +0x973

goroutine 2441 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc00278a2c0, 0xc002a282a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 561
	/usr/local/go/src/os/exec/exec.go:750 +0x973
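
Goroutines 2428 and 2440 ("IO wait") and 2430 and 2441 ("select") are the helper goroutines os/exec starts for a running child process: one io.Copy per captured output stream, plus a watchCtx goroutine when the command carries a context. A rough, self-contained sketch of how such goroutines come about (the command shown is hypothetical, not the harness code):

package example

import (
	"bytes"
	"context"
	"os/exec"
)

// Assigning a non-*os.File writer to Stdout/Stderr makes (*Cmd).Start create
// a pipe and spawn one copier goroutine per stream (the "IO wait" stacks);
// CommandContext additionally spawns a watchCtx goroutine (the "select" stacks).
func runWithCapture(ctx context.Context) (string, string, error) {
	var stdout, stderr bytes.Buffer
	cmd := exec.CommandContext(ctx, "sleep", "60")
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run() // copiers stay in IO wait until the child writes or exits
	return stdout.String(), stderr.String(), err
}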

TestDockerFlags (752.61s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0415 06:14:41.545832   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 06:14:54.115725   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 06:19:24.677332   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 06:19:41.618949   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 06:19:54.190701   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 06:24:37.236345   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 06:24:41.609590   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 06:24:54.178872   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m31.308960268s)
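
For reference, each "(dbg) Run" / "(dbg) Non-zero exit" pair in this report corresponds to the integration harness executing the minikube binary and recording its duration and exit status. A simplified sketch of that pattern (the helper name here is hypothetical; the real helper lives in helpers_test.go):

package example

import (
	"os/exec"
	"testing"
	"time"
)

// runBinary mirrors the "(dbg) Run" lines: execute the command, log its
// duration and combined output, and return any error; a non-zero status
// (such as exit status 52 above) surfaces as an *exec.ExitError.
func runBinary(t *testing.T, name string, args ...string) error {
	t.Helper()
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	t.Logf("(dbg) Run: %s %v (took %s)\n%s", name, args, time.Since(start), out)
	return err
}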

-- stdout --
	* [docker-flags-007000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-007000" primary control-plane node in "docker-flags-007000" cluster
	* Pulling base image v0.0.43-1712854342-18621 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-007000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0415 06:13:25.702526   33811 out.go:291] Setting OutFile to fd 1 ...
	I0415 06:13:25.702784   33811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 06:13:25.702789   33811 out.go:304] Setting ErrFile to fd 2...
	I0415 06:13:25.702793   33811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 06:13:25.702976   33811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 06:13:25.704470   33811 out.go:298] Setting JSON to false
	I0415 06:13:25.726573   33811 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":11575,"bootTime":1713175230,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 06:13:25.726667   33811 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 06:13:25.748063   33811 out.go:177] * [docker-flags-007000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 06:13:25.769677   33811 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 06:13:25.769723   33811 notify.go:220] Checking for updates...
	I0415 06:13:25.812728   33811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 06:13:25.833635   33811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 06:13:25.854831   33811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 06:13:25.876766   33811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	I0415 06:13:25.898664   33811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 06:13:25.920598   33811 config.go:182] Loaded profile config "force-systemd-flag-656000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 06:13:25.920757   33811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 06:13:25.976272   33811 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 06:13:25.976432   33811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 06:13:26.083339   33811 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:115 OomKillDisable:false NGoroutines:233 SystemTime:2024-04-15 13:13:26.072733077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 06:13:26.105442   33811 out.go:177] * Using the docker driver based on user configuration
	I0415 06:13:26.126950   33811 start.go:297] selected driver: docker
	I0415 06:13:26.126983   33811 start.go:901] validating driver "docker" against <nil>
	I0415 06:13:26.126998   33811 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 06:13:26.131386   33811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 06:13:26.237894   33811 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:115 OomKillDisable:false NGoroutines:233 SystemTime:2024-04-15 13:13:26.228202994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 06:13:26.238089   33811 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 06:13:26.238271   33811 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0415 06:13:26.259518   33811 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 06:13:26.280633   33811 cni.go:84] Creating CNI manager for ""
	I0415 06:13:26.280680   33811 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 06:13:26.280696   33811 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 06:13:26.280784   33811 start.go:340] cluster config:
	{Name:docker-flags-007000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 06:13:26.302281   33811 out.go:177] * Starting "docker-flags-007000" primary control-plane node in "docker-flags-007000" cluster
	I0415 06:13:26.344560   33811 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 06:13:26.366293   33811 out.go:177] * Pulling base image v0.0.43-1712854342-18621 ...
	I0415 06:13:26.408402   33811 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:13:26.408458   33811 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 06:13:26.408495   33811 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 06:13:26.408516   33811 cache.go:56] Caching tarball of preloaded images
	I0415 06:13:26.408757   33811 preload.go:173] Found /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 06:13:26.408778   33811 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 06:13:26.409624   33811 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/docker-flags-007000/config.json ...
	I0415 06:13:26.409837   33811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/docker-flags-007000/config.json: {Name:mk4cfb7288331704f284f290ee0afc1eede137d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 06:13:26.461137   33811 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon, skipping pull
	I0415 06:13:26.461156   33811 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in daemon, skipping load
	I0415 06:13:26.461180   33811 cache.go:194] Successfully downloaded all kic artifacts
	I0415 06:13:26.461235   33811 start.go:360] acquireMachinesLock for docker-flags-007000: {Name:mkc85f29eab422537489e914b70c3f372231bf2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 06:13:26.461397   33811 start.go:364] duration metric: took 149.206µs to acquireMachinesLock for "docker-flags-007000"
	I0415 06:13:26.461426   33811 start.go:93] Provisioning new machine with config: &{Name:docker-flags-007000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 06:13:26.461507   33811 start.go:125] createHost starting for "" (driver="docker")
	I0415 06:13:26.483392   33811 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 06:13:26.483749   33811 start.go:159] libmachine.API.Create for "docker-flags-007000" (driver="docker")
	I0415 06:13:26.483792   33811 client.go:168] LocalClient.Create starting
	I0415 06:13:26.484041   33811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 06:13:26.484147   33811 main.go:141] libmachine: Decoding PEM data...
	I0415 06:13:26.484180   33811 main.go:141] libmachine: Parsing certificate...
	I0415 06:13:26.484274   33811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 06:13:26.484350   33811 main.go:141] libmachine: Decoding PEM data...
	I0415 06:13:26.484365   33811 main.go:141] libmachine: Parsing certificate...
	I0415 06:13:26.485264   33811 cli_runner.go:164] Run: docker network inspect docker-flags-007000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 06:13:26.536132   33811 cli_runner.go:211] docker network inspect docker-flags-007000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 06:13:26.536236   33811 network_create.go:281] running [docker network inspect docker-flags-007000] to gather additional debugging logs...
	I0415 06:13:26.536252   33811 cli_runner.go:164] Run: docker network inspect docker-flags-007000
	W0415 06:13:26.584707   33811 cli_runner.go:211] docker network inspect docker-flags-007000 returned with exit code 1
	I0415 06:13:26.584740   33811 network_create.go:284] error running [docker network inspect docker-flags-007000]: docker network inspect docker-flags-007000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-007000 not found
	I0415 06:13:26.584778   33811 network_create.go:286] output of [docker network inspect docker-flags-007000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-007000 not found
	
	** /stderr **
	I0415 06:13:26.584900   33811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:13:26.635338   33811 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:13:26.636769   33811 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:13:26.638249   33811 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:13:26.638731   33811 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002400000}
	I0415 06:13:26.638786   33811 network_create.go:124] attempt to create docker network docker-flags-007000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 06:13:26.638936   33811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-007000 docker-flags-007000
	W0415 06:13:26.687486   33811 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-007000 docker-flags-007000 returned with exit code 1
	W0415 06:13:26.687519   33811 network_create.go:149] failed to create docker network docker-flags-007000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-007000 docker-flags-007000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 06:13:26.687539   33811 network_create.go:116] failed to create docker network docker-flags-007000 192.168.76.0/24, will retry: subnet is taken
	I0415 06:13:26.688925   33811 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:13:26.689282   33811 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002354fe0}
	I0415 06:13:26.689297   33811 network_create.go:124] attempt to create docker network docker-flags-007000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 06:13:26.689368   33811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-007000 docker-flags-007000
	I0415 06:13:26.772908   33811 network_create.go:108] docker network docker-flags-007000 192.168.85.0/24 created
	I0415 06:13:26.772946   33811 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-007000" container
	I0415 06:13:26.773069   33811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 06:13:26.824221   33811 cli_runner.go:164] Run: docker volume create docker-flags-007000 --label name.minikube.sigs.k8s.io=docker-flags-007000 --label created_by.minikube.sigs.k8s.io=true
	I0415 06:13:26.877049   33811 oci.go:103] Successfully created a docker volume docker-flags-007000
	I0415 06:13:26.877154   33811 cli_runner.go:164] Run: docker run --rm --name docker-flags-007000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-007000 --entrypoint /usr/bin/test -v docker-flags-007000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 06:13:27.189713   33811 oci.go:107] Successfully prepared a docker volume docker-flags-007000
	I0415 06:13:27.189756   33811 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:13:27.189772   33811 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 06:13:27.189873   33811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-007000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 06:19:26.555627   33811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:19:26.555707   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:19:26.605339   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:19:26.605464   33811 retry.go:31] will retry after 131.025682ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:26.738671   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:19:26.789111   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:19:26.789220   33811 retry.go:31] will retry after 202.06541ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:26.992206   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:19:27.040918   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:19:27.041017   33811 retry.go:31] will retry after 519.238689ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:27.562632   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:19:27.614188   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	W0415 06:19:27.614290   33811 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	
	W0415 06:19:27.614314   33811 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:27.614378   33811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:19:27.614429   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:19:27.662687   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:19:27.662775   33811 retry.go:31] will retry after 305.389081ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:27.969152   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:19:28.019377   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:19:28.019473   33811 retry.go:31] will retry after 536.720343ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:28.558551   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:19:28.610457   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:19:28.610560   33811 retry.go:31] will retry after 644.609356ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:29.256565   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:19:29.306790   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	W0415 06:19:29.306889   33811 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	
	W0415 06:19:29.306916   33811 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:29.306933   33811 start.go:128] duration metric: took 6m2.774308056s to createHost
	I0415 06:19:29.306940   33811 start.go:83] releasing machines lock for "docker-flags-007000", held for 6m2.774431992s
	W0415 06:19:29.306957   33811 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 06:19:29.307389   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:29.355334   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:29.355390   33811 delete.go:82] Unable to get host status for docker-flags-007000, assuming it has already been deleted: state: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	W0415 06:19:29.355487   33811 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 06:19:29.355498   33811 start.go:728] Will try again in 5 seconds ...
	I0415 06:19:34.357570   33811 start.go:360] acquireMachinesLock for docker-flags-007000: {Name:mkc85f29eab422537489e914b70c3f372231bf2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 06:19:34.357789   33811 start.go:364] duration metric: took 166.596µs to acquireMachinesLock for "docker-flags-007000"
	I0415 06:19:34.357823   33811 start.go:96] Skipping create...Using existing machine configuration
	I0415 06:19:34.357842   33811 fix.go:54] fixHost starting: 
	I0415 06:19:34.358291   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:34.411187   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:34.411235   33811 fix.go:112] recreateIfNeeded on docker-flags-007000: state= err=unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:34.411256   33811 fix.go:117] machineExists: false. err=machine does not exist
	I0415 06:19:34.455816   33811 out.go:177] * docker "docker-flags-007000" container is missing, will recreate.
	I0415 06:19:34.476695   33811 delete.go:124] DEMOLISHING docker-flags-007000 ...
	I0415 06:19:34.476926   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:34.526790   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	W0415 06:19:34.526843   33811 stop.go:83] unable to get state: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:34.526860   33811 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:34.527245   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:34.628504   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:34.628555   33811 delete.go:82] Unable to get host status for docker-flags-007000, assuming it has already been deleted: state: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:34.628639   33811 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-007000
	W0415 06:19:34.676731   33811 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-007000 returned with exit code 1
	I0415 06:19:34.676767   33811 kic.go:371] could not find the container docker-flags-007000 to remove it. will try anyways
	I0415 06:19:34.676838   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:34.724526   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	W0415 06:19:34.724573   33811 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:34.724657   33811 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-007000 /bin/bash -c "sudo init 0"
	W0415 06:19:34.773701   33811 cli_runner.go:211] docker exec --privileged -t docker-flags-007000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 06:19:34.773734   33811 oci.go:650] error shutdown docker-flags-007000: docker exec --privileged -t docker-flags-007000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:35.775862   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:35.851044   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:35.851094   33811 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:35.851106   33811 oci.go:664] temporary error: container docker-flags-007000 status is  but expect it to be exited
	I0415 06:19:35.851131   33811 retry.go:31] will retry after 656.06863ms: couldn't verify container is exited. %v: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:36.509516   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:36.563604   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:36.563655   33811 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:36.563669   33811 oci.go:664] temporary error: container docker-flags-007000 status is  but expect it to be exited
	I0415 06:19:36.563698   33811 retry.go:31] will retry after 856.537591ms: couldn't verify container is exited. %v: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:37.422601   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:37.477060   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:37.477113   33811 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:37.477124   33811 oci.go:664] temporary error: container docker-flags-007000 status is  but expect it to be exited
	I0415 06:19:37.477147   33811 retry.go:31] will retry after 1.260450573s: couldn't verify container is exited. %v: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:38.739908   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:38.794075   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:38.794121   33811 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:38.794130   33811 oci.go:664] temporary error: container docker-flags-007000 status is  but expect it to be exited
	I0415 06:19:38.794155   33811 retry.go:31] will retry after 1.899576178s: couldn't verify container is exited. %v: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:40.695991   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:40.749515   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:40.749561   33811 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:40.749571   33811 oci.go:664] temporary error: container docker-flags-007000 status is  but expect it to be exited
	I0415 06:19:40.749599   33811 retry.go:31] will retry after 1.509465052s: couldn't verify container is exited. %v: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:42.259381   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:42.309717   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:42.309761   33811 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:42.309770   33811 oci.go:664] temporary error: container docker-flags-007000 status is  but expect it to be exited
	I0415 06:19:42.309794   33811 retry.go:31] will retry after 2.537304444s: couldn't verify container is exited. %v: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:44.848239   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:44.900307   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:44.900355   33811 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:44.900366   33811 oci.go:664] temporary error: container docker-flags-007000 status is  but expect it to be exited
	I0415 06:19:44.900392   33811 retry.go:31] will retry after 4.627740455s: couldn't verify container is exited. %v: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:49.529341   33811 cli_runner.go:164] Run: docker container inspect docker-flags-007000 --format={{.State.Status}}
	W0415 06:19:49.627858   33811 cli_runner.go:211] docker container inspect docker-flags-007000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:49.627919   33811 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:19:49.627931   33811 oci.go:664] temporary error: container docker-flags-007000 status is  but expect it to be exited
	I0415 06:19:49.627975   33811 oci.go:88] couldn't shut down docker-flags-007000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	 
	I0415 06:19:49.628040   33811 cli_runner.go:164] Run: docker rm -f -v docker-flags-007000
	I0415 06:19:49.677284   33811 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-007000
	W0415 06:19:49.726052   33811 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-007000 returned with exit code 1
	I0415 06:19:49.726158   33811 cli_runner.go:164] Run: docker network inspect docker-flags-007000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:19:49.774797   33811 cli_runner.go:164] Run: docker network rm docker-flags-007000
	I0415 06:19:49.881195   33811 fix.go:124] Sleeping 1 second for extra luck!
	I0415 06:19:50.881985   33811 start.go:125] createHost starting for "" (driver="docker")
	I0415 06:19:50.904156   33811 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 06:19:50.904378   33811 start.go:159] libmachine.API.Create for "docker-flags-007000" (driver="docker")
	I0415 06:19:50.904404   33811 client.go:168] LocalClient.Create starting
	I0415 06:19:50.904628   33811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 06:19:50.904721   33811 main.go:141] libmachine: Decoding PEM data...
	I0415 06:19:50.904749   33811 main.go:141] libmachine: Parsing certificate...
	I0415 06:19:50.904841   33811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 06:19:50.904916   33811 main.go:141] libmachine: Decoding PEM data...
	I0415 06:19:50.904931   33811 main.go:141] libmachine: Parsing certificate...
	I0415 06:19:50.905621   33811 cli_runner.go:164] Run: docker network inspect docker-flags-007000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 06:19:50.956436   33811 cli_runner.go:211] docker network inspect docker-flags-007000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 06:19:50.956531   33811 network_create.go:281] running [docker network inspect docker-flags-007000] to gather additional debugging logs...
	I0415 06:19:50.956549   33811 cli_runner.go:164] Run: docker network inspect docker-flags-007000
	W0415 06:19:51.007144   33811 cli_runner.go:211] docker network inspect docker-flags-007000 returned with exit code 1
	I0415 06:19:51.007174   33811 network_create.go:284] error running [docker network inspect docker-flags-007000]: docker network inspect docker-flags-007000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-007000 not found
	I0415 06:19:51.007188   33811 network_create.go:286] output of [docker network inspect docker-flags-007000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-007000 not found
	
	** /stderr **
	I0415 06:19:51.007343   33811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:19:51.057739   33811 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:19:51.059333   33811 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:19:51.061080   33811 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:19:51.062741   33811 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:19:51.064295   33811 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:19:51.065861   33811 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:19:51.066195   33811 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022dccf0}
	I0415 06:19:51.066207   33811 network_create.go:124] attempt to create docker network docker-flags-007000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0415 06:19:51.066288   33811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-007000 docker-flags-007000
	I0415 06:19:51.152580   33811 network_create.go:108] docker network docker-flags-007000 192.168.103.0/24 created
	I0415 06:19:51.152623   33811 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-007000" container
	I0415 06:19:51.152735   33811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 06:19:51.204116   33811 cli_runner.go:164] Run: docker volume create docker-flags-007000 --label name.minikube.sigs.k8s.io=docker-flags-007000 --label created_by.minikube.sigs.k8s.io=true
	I0415 06:19:51.253998   33811 oci.go:103] Successfully created a docker volume docker-flags-007000
	I0415 06:19:51.254106   33811 cli_runner.go:164] Run: docker run --rm --name docker-flags-007000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-007000 --entrypoint /usr/bin/test -v docker-flags-007000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 06:19:51.510661   33811 oci.go:107] Successfully prepared a docker volume docker-flags-007000
	I0415 06:19:51.510694   33811 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:19:51.510707   33811 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 06:19:51.510817   33811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-007000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 06:25:50.894952   33811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:25:50.895074   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:50.948980   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:50.949101   33811 retry.go:31] will retry after 288.855252ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:51.239268   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:51.289992   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:51.290115   33811 retry.go:31] will retry after 202.925128ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:51.495513   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:51.549133   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:51.549231   33811 retry.go:31] will retry after 348.606629ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:51.900262   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:51.954437   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:51.954539   33811 retry.go:31] will retry after 509.765252ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:52.465136   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:52.516789   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	W0415 06:25:52.516891   33811 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	
	W0415 06:25:52.516915   33811 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:52.516971   33811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:25:52.517022   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:52.567180   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:52.567292   33811 retry.go:31] will retry after 236.410387ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:52.806169   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:52.858120   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:52.858221   33811 retry.go:31] will retry after 221.357029ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:53.081919   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:53.136953   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:53.137065   33811 retry.go:31] will retry after 652.311113ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:53.791788   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:53.846464   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	W0415 06:25:53.846568   33811 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	
	W0415 06:25:53.846593   33811 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:53.846604   33811 start.go:128] duration metric: took 6m2.976511159s to createHost
	I0415 06:25:53.846674   33811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:25:53.846728   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:53.896482   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:53.896575   33811 retry.go:31] will retry after 254.081732ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:54.152269   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:54.203909   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:54.204006   33811 retry.go:31] will retry after 437.128204ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:54.643480   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:54.694498   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:54.694594   33811 retry.go:31] will retry after 609.89842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:55.306911   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:55.361228   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	W0415 06:25:55.361333   33811 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	
	W0415 06:25:55.361348   33811 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:55.361415   33811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:25:55.361466   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:55.410453   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:55.410544   33811 retry.go:31] will retry after 226.851264ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:55.639749   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:55.691107   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:55.691206   33811 retry.go:31] will retry after 277.622203ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:55.971263   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:56.023852   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	I0415 06:25:56.023945   33811 retry.go:31] will retry after 779.179467ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:56.805467   33811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000
	W0415 06:25:56.858890   33811 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000 returned with exit code 1
	W0415 06:25:56.858990   33811 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	
	W0415 06:25:56.859008   33811 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-007000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-007000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	I0415 06:25:56.859020   33811 fix.go:56] duration metric: took 6m22.513765052s for fixHost
	I0415 06:25:56.859026   33811 start.go:83] releasing machines lock for "docker-flags-007000", held for 6m22.513806508s
	W0415 06:25:56.859107   33811 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-007000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-007000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 06:25:56.902699   33811 out.go:177] 
	W0415 06:25:56.923889   33811 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 06:25:56.923972   33811 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 06:25:56.923994   33811 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 06:25:56.945888   33811 out.go:177] 

                                                
                                                
** /stderr **
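
The stderr trace above fails in one repeating pattern: every probe of the container exits 1 with "No such container", so minikube can never verify a container state and the 360-second createHost budget expires. For readability, the two probes the trace repeats are shown below on their own lines; these are the same commands that appear in the log, and both are expected to fail for as long as the container does not exist:

	# State probe; exit status 1 plus "No such container" is what the
	# retry loop records as an "unknown state" before backing off.
	docker container inspect docker-flags-007000 --format '{{.State.Status}}'

	# SSH host-port lookup used by the df -h / df -BG checks; it fails
	# the same way because the container was never created.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' docker-flags-007000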
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (206.157551ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-007000 host status: state: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (197.802796ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-007000 host status: state: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000
	

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-04-15 06:25:57.425852 -0700 PDT m=+7024.560228997
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-007000
helpers_test.go:235: (dbg) docker inspect docker-flags-007000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "docker-flags-007000",
	        "Id": "f0e21f14c99bdea68e90e1e99416fa71d63efdeb315704f5f51c34adc2b9fae1",
	        "Created": "2024-04-15T13:19:51.113680305Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-007000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-007000 -n docker-flags-007000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-007000 -n docker-flags-007000: exit status 7 (113.366482ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 06:25:57.589023   34339 status.go:249] status error: host: state: unknown state "docker-flags-007000": docker container inspect docker-flags-007000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-007000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-007000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-007000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-007000
--- FAIL: TestDockerFlags (752.61s)
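
The post-mortem docker inspect above shows the bridge network was created but never had a container attached (its Containers map is empty), which is consistent with the create-host timeout. A minimal manual-cleanup sketch, using only commands that already appear in this log (the test harness runs the delete itself during cleanup):

	docker network inspect docker-flags-007000   # confirm the leftover, container-less network
	minikube delete -p docker-flags-007000       # remove the profile and its docker artifacts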

                                                
                                    
TestForceSystemdFlag (757.72s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-656000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-656000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m36.615082604s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-656000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-656000" primary control-plane node in "force-systemd-flag-656000" cluster
	* Pulling base image v0.0.43-1712854342-18621 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-656000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 06:12:56.848585   33682 out.go:291] Setting OutFile to fd 1 ...
	I0415 06:12:56.848861   33682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 06:12:56.848866   33682 out.go:304] Setting ErrFile to fd 2...
	I0415 06:12:56.848870   33682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 06:12:56.849075   33682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 06:12:56.850538   33682 out.go:298] Setting JSON to false
	I0415 06:12:56.872739   33682 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":11546,"bootTime":1713175230,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 06:12:56.872822   33682 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 06:12:56.895048   33682 out.go:177] * [force-systemd-flag-656000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 06:12:56.916858   33682 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 06:12:56.916904   33682 notify.go:220] Checking for updates...
	I0415 06:12:56.960685   33682 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 06:12:56.981652   33682 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 06:12:57.002758   33682 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 06:12:57.023741   33682 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	I0415 06:12:57.044604   33682 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 06:12:57.066387   33682 config.go:182] Loaded profile config "force-systemd-env-830000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 06:12:57.066574   33682 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 06:12:57.121925   33682 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 06:12:57.122105   33682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 06:12:57.228680   33682 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:111 OomKillDisable:false NGoroutines:223 SystemTime:2024-04-15 13:12:57.21804087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 06:12:57.250567   33682 out.go:177] * Using the docker driver based on user configuration
	I0415 06:12:57.272267   33682 start.go:297] selected driver: docker
	I0415 06:12:57.272299   33682 start.go:901] validating driver "docker" against <nil>
	I0415 06:12:57.272314   33682 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 06:12:57.276724   33682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 06:12:57.383088   33682 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:111 OomKillDisable:false NGoroutines:223 SystemTime:2024-04-15 13:12:57.373358387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 06:12:57.383295   33682 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 06:12:57.383480   33682 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 06:12:57.404595   33682 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 06:12:57.425663   33682 cni.go:84] Creating CNI manager for ""
	I0415 06:12:57.425707   33682 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 06:12:57.425732   33682 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 06:12:57.425872   33682 start.go:340] cluster config:
	{Name:force-systemd-flag-656000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-656000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 06:12:57.447712   33682 out.go:177] * Starting "force-systemd-flag-656000" primary control-plane node in "force-systemd-flag-656000" cluster
	I0415 06:12:57.489728   33682 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 06:12:57.511564   33682 out.go:177] * Pulling base image v0.0.43-1712854342-18621 ...
	I0415 06:12:57.553740   33682 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:12:57.553789   33682 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 06:12:57.553827   33682 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 06:12:57.553843   33682 cache.go:56] Caching tarball of preloaded images
	I0415 06:12:57.554053   33682 preload.go:173] Found /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 06:12:57.554073   33682 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 06:12:57.554197   33682 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/force-systemd-flag-656000/config.json ...
	I0415 06:12:57.554908   33682 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/force-systemd-flag-656000/config.json: {Name:mk5af4820e765e788c255d67244064dc27cbc736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 06:12:57.605956   33682 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon, skipping pull
	I0415 06:12:57.605973   33682 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in daemon, skipping load
	I0415 06:12:57.606000   33682 cache.go:194] Successfully downloaded all kic artifacts
	I0415 06:12:57.606052   33682 start.go:360] acquireMachinesLock for force-systemd-flag-656000: {Name:mk2583fc98debe7672a0fb2833de17d74f3021a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 06:12:57.606222   33682 start.go:364] duration metric: took 156.887µs to acquireMachinesLock for "force-systemd-flag-656000"
	I0415 06:12:57.606253   33682 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-656000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-656000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 06:12:57.606439   33682 start.go:125] createHost starting for "" (driver="docker")
	I0415 06:12:57.627757   33682 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 06:12:57.628173   33682 start.go:159] libmachine.API.Create for "force-systemd-flag-656000" (driver="docker")
	I0415 06:12:57.628224   33682 client.go:168] LocalClient.Create starting
	I0415 06:12:57.628412   33682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 06:12:57.628508   33682 main.go:141] libmachine: Decoding PEM data...
	I0415 06:12:57.628538   33682 main.go:141] libmachine: Parsing certificate...
	I0415 06:12:57.628629   33682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 06:12:57.628716   33682 main.go:141] libmachine: Decoding PEM data...
	I0415 06:12:57.628732   33682 main.go:141] libmachine: Parsing certificate...
	I0415 06:12:57.629609   33682 cli_runner.go:164] Run: docker network inspect force-systemd-flag-656000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 06:12:57.678977   33682 cli_runner.go:211] docker network inspect force-systemd-flag-656000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 06:12:57.679087   33682 network_create.go:281] running [docker network inspect force-systemd-flag-656000] to gather additional debugging logs...
	I0415 06:12:57.679101   33682 cli_runner.go:164] Run: docker network inspect force-systemd-flag-656000
	W0415 06:12:57.727790   33682 cli_runner.go:211] docker network inspect force-systemd-flag-656000 returned with exit code 1
	I0415 06:12:57.727820   33682 network_create.go:284] error running [docker network inspect force-systemd-flag-656000]: docker network inspect force-systemd-flag-656000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-656000 not found
	I0415 06:12:57.727838   33682 network_create.go:286] output of [docker network inspect force-systemd-flag-656000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-656000 not found
	
	** /stderr **
	I0415 06:12:57.727989   33682 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:12:57.778638   33682 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:12:57.780064   33682 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:12:57.780426   33682 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d075b0}
	I0415 06:12:57.780443   33682 network_create.go:124] attempt to create docker network force-systemd-flag-656000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 06:12:57.780515   33682 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-656000 force-systemd-flag-656000
	I0415 06:12:57.866003   33682 network_create.go:108] docker network force-systemd-flag-656000 192.168.67.0/24 created
	I0415 06:12:57.866049   33682 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-656000" container
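The network.go lines above show the subnet picker at work: candidate 192.168.x.0/24 ranges are tried with the third octet stepping by 9 (49, 58, 67, ...), reserved ones are skipped, and the winner's .1 and .2 addresses become the gateway and the node's static IP. A rough, self-contained sketch of that selection, where isReserved is an assumption standing in for the real Docker network inspection:

	package main

	import "fmt"

	// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... and
	// returns the first candidate the callback does not mark reserved,
	// plus the derived gateway (.1) and node IP (.2).
	func firstFreeSubnet(isReserved func(string) bool) (subnet, gw, node string) {
		for octet := 49; octet <= 255; octet += 9 {
			subnet = fmt.Sprintf("192.168.%d.0/24", octet)
			if isReserved(subnet) {
				continue
			}
			return subnet, fmt.Sprintf("192.168.%d.1", octet), fmt.Sprintf("192.168.%d.2", octet)
		}
		return "", "", ""
	}

	func main() {
		reserved := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
		// Prints "192.168.67.0/24 192.168.67.1 192.168.67.2", matching the log.
		fmt.Println(firstFreeSubnet(func(s string) bool { return reserved[s] }))
	}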
	I0415 06:12:57.866162   33682 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 06:12:57.917037   33682 cli_runner.go:164] Run: docker volume create force-systemd-flag-656000 --label name.minikube.sigs.k8s.io=force-systemd-flag-656000 --label created_by.minikube.sigs.k8s.io=true
	I0415 06:12:57.966808   33682 oci.go:103] Successfully created a docker volume force-systemd-flag-656000
	I0415 06:12:57.966919   33682 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-656000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-656000 --entrypoint /usr/bin/test -v force-systemd-flag-656000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 06:12:58.285749   33682 oci.go:107] Successfully prepared a docker volume force-systemd-flag-656000
	I0415 06:12:58.285804   33682 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:12:58.285820   33682 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 06:12:58.285929   33682 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-656000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
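This docker run is the step the test then sits on: the next log line is almost six minutes later (06:12:58 to 06:18:57), by which point the create-host window is gone. A minimal sketch, independent of minikube's code, of running the same extraction under an explicit deadline so a hung copy fails fast (volume, image, and tarball names abbreviated from the log):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Kill the extraction if it has not finished within five minutes.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro",
			"-v", "force-systemd-flag-656000:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621",
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}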
	I0415 06:18:57.701740   33682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:18:57.701884   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:18:57.754043   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:18:57.754179   33682 retry.go:31] will retry after 321.726366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
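The retry.go delays above grow on each attempt (321ms, 461ms, 570ms) because the helper backs off with jitter between probes. A self-contained approximation of that loop, not the actual retry.go implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs op up to attempts times, sleeping a little longer (plus
	// jitter) after each failure, like the "will retry after ..." lines above.
	func retry(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retry(4, 300*time.Millisecond, func() error {
			return errors.New("No such container: force-systemd-flag-656000")
		})
	}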
	I0415 06:18:58.078325   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:18:58.129920   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:18:58.130012   33682 retry.go:31] will retry after 461.156179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:18:58.591743   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:18:58.646138   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:18:58.646251   33682 retry.go:31] will retry after 570.661994ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:18:59.219232   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:18:59.274270   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	W0415 06:18:59.274380   33682 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	
	W0415 06:18:59.274401   33682 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:18:59.274472   33682 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:18:59.274531   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:18:59.323899   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:18:59.323997   33682 retry.go:31] will retry after 326.535729ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:18:59.652920   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:18:59.704565   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:18:59.704664   33682 retry.go:31] will retry after 489.710308ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:00.195882   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:19:00.247778   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:19:00.247869   33682 retry.go:31] will retry after 808.132492ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:01.058317   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:19:01.110491   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	W0415 06:19:01.110596   33682 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	
	W0415 06:19:01.110638   33682 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:01.110656   33682 start.go:128] duration metric: took 6m3.433213699s to createHost
	I0415 06:19:01.110664   33682 start.go:83] releasing machines lock for "force-systemd-flag-656000", held for 6m3.433442831s
	W0415 06:19:01.110680   33682 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
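The 360.000000-second figure here is the 6m0s StartHostTimeout recorded in the config dump above: createHost began at 06:12:57.606 and gave up at 06:19:01.110, the reported 6m3.43s, because the volume extraction launched at 06:12:58 never yielded a container to SSH into before the deadline.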
	I0415 06:19:01.111116   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:01.159774   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:01.159838   33682 delete.go:82] Unable to get host status for force-systemd-flag-656000, assuming it has already been deleted: state: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	W0415 06:19:01.159939   33682 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 06:19:01.159948   33682 start.go:728] Will try again in 5 seconds ...
	I0415 06:19:06.161144   33682 start.go:360] acquireMachinesLock for force-systemd-flag-656000: {Name:mk2583fc98debe7672a0fb2833de17d74f3021a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 06:19:06.162093   33682 start.go:364] duration metric: took 882.587µs to acquireMachinesLock for "force-systemd-flag-656000"
	I0415 06:19:06.162178   33682 start.go:96] Skipping create...Using existing machine configuration
	I0415 06:19:06.162197   33682 fix.go:54] fixHost starting: 
	I0415 06:19:06.162747   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:06.214645   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:06.214688   33682 fix.go:112] recreateIfNeeded on force-systemd-flag-656000: state= err=unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:06.214706   33682 fix.go:117] machineExists: false. err=machine does not exist
	I0415 06:19:06.236923   33682 out.go:177] * docker "force-systemd-flag-656000" container is missing, will recreate.
	I0415 06:19:06.279498   33682 delete.go:124] DEMOLISHING force-systemd-flag-656000 ...
	I0415 06:19:06.279708   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:06.329490   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	W0415 06:19:06.329551   33682 stop.go:83] unable to get state: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:06.329570   33682 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:06.329961   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:06.377901   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:06.377961   33682 delete.go:82] Unable to get host status for force-systemd-flag-656000, assuming it has already been deleted: state: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:06.378051   33682 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-656000
	W0415 06:19:06.425952   33682 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-656000 returned with exit code 1
	I0415 06:19:06.425996   33682 kic.go:371] could not find the container force-systemd-flag-656000 to remove it. will try anyways
	I0415 06:19:06.426080   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:06.474942   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	W0415 06:19:06.475003   33682 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:06.475086   33682 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-656000 /bin/bash -c "sudo init 0"
	W0415 06:19:06.523641   33682 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-656000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 06:19:06.523678   33682 oci.go:650] error shutdown force-systemd-flag-656000: docker exec --privileged -t force-systemd-flag-656000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:07.525549   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:07.578965   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:07.579008   33682 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:07.579028   33682 oci.go:664] temporary error: container force-systemd-flag-656000 status is  but expect it to be exited
	I0415 06:19:07.579050   33682 retry.go:31] will retry after 535.360166ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:08.115246   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:08.168175   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:08.168219   33682 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:08.168233   33682 oci.go:664] temporary error: container force-systemd-flag-656000 status is  but expect it to be exited
	I0415 06:19:08.168256   33682 retry.go:31] will retry after 1.052286285s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:09.222925   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:09.274444   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:09.274492   33682 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:09.274503   33682 oci.go:664] temporary error: container force-systemd-flag-656000 status is  but expect it to be exited
	I0415 06:19:09.274526   33682 retry.go:31] will retry after 919.888407ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:10.195192   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:10.249608   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:10.249661   33682 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:10.249670   33682 oci.go:664] temporary error: container force-systemd-flag-656000 status is  but expect it to be exited
	I0415 06:19:10.249692   33682 retry.go:31] will retry after 1.137833962s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:11.388460   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:11.440179   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:11.440226   33682 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:11.440236   33682 oci.go:664] temporary error: container force-systemd-flag-656000 status is  but expect it to be exited
	I0415 06:19:11.440262   33682 retry.go:31] will retry after 3.124596815s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:14.567132   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:14.619082   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:14.619128   33682 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:14.619142   33682 oci.go:664] temporary error: container force-systemd-flag-656000 status is  but expect it to be exited
	I0415 06:19:14.619168   33682 retry.go:31] will retry after 3.217110455s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:17.837773   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:17.891294   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:17.891337   33682 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:17.891351   33682 oci.go:664] temporary error: container force-systemd-flag-656000 status is  but expect it to be exited
	I0415 06:19:17.891377   33682 retry.go:31] will retry after 7.114349807s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:25.006332   33682 cli_runner.go:164] Run: docker container inspect force-systemd-flag-656000 --format={{.State.Status}}
	W0415 06:19:25.059479   33682 cli_runner.go:211] docker container inspect force-systemd-flag-656000 --format={{.State.Status}} returned with exit code 1
	I0415 06:19:25.059531   33682 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:19:25.059545   33682 oci.go:664] temporary error: container force-systemd-flag-656000 status is  but expect it to be exited
	I0415 06:19:25.059575   33682 oci.go:88] couldn't shut down force-systemd-flag-656000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	 
	I0415 06:19:25.059654   33682 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-656000
	I0415 06:19:25.109678   33682 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-656000
	W0415 06:19:25.157300   33682 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-656000 returned with exit code 1
	I0415 06:19:25.157416   33682 cli_runner.go:164] Run: docker network inspect force-systemd-flag-656000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:19:25.206415   33682 cli_runner.go:164] Run: docker network rm force-systemd-flag-656000
	I0415 06:19:25.314765   33682 fix.go:124] Sleeping 1 second for extra luck!
	I0415 06:19:26.316883   33682 start.go:125] createHost starting for "" (driver="docker")
	I0415 06:19:26.338762   33682 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 06:19:26.338923   33682 start.go:159] libmachine.API.Create for "force-systemd-flag-656000" (driver="docker")
	I0415 06:19:26.338956   33682 client.go:168] LocalClient.Create starting
	I0415 06:19:26.339176   33682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 06:19:26.339280   33682 main.go:141] libmachine: Decoding PEM data...
	I0415 06:19:26.339305   33682 main.go:141] libmachine: Parsing certificate...
	I0415 06:19:26.339391   33682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 06:19:26.339466   33682 main.go:141] libmachine: Decoding PEM data...
	I0415 06:19:26.339495   33682 main.go:141] libmachine: Parsing certificate...
	I0415 06:19:26.360358   33682 cli_runner.go:164] Run: docker network inspect force-systemd-flag-656000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 06:19:26.411143   33682 cli_runner.go:211] docker network inspect force-systemd-flag-656000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 06:19:26.411236   33682 network_create.go:281] running [docker network inspect force-systemd-flag-656000] to gather additional debugging logs...
	I0415 06:19:26.411253   33682 cli_runner.go:164] Run: docker network inspect force-systemd-flag-656000
	W0415 06:19:26.459974   33682 cli_runner.go:211] docker network inspect force-systemd-flag-656000 returned with exit code 1
	I0415 06:19:26.460002   33682 network_create.go:284] error running [docker network inspect force-systemd-flag-656000]: docker network inspect force-systemd-flag-656000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-656000 not found
	I0415 06:19:26.460014   33682 network_create.go:286] output of [docker network inspect force-systemd-flag-656000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-656000 not found
	
	** /stderr **
	I0415 06:19:26.460157   33682 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:19:26.510756   33682 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:19:26.512312   33682 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:19:26.513900   33682 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:19:26.515497   33682 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:19:26.517030   33682 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:19:26.517406   33682 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00229d780}
	I0415 06:19:26.517420   33682 network_create.go:124] attempt to create docker network force-systemd-flag-656000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0415 06:19:26.517493   33682 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-656000 force-systemd-flag-656000
	I0415 06:19:26.603651   33682 network_create.go:108] docker network force-systemd-flag-656000 192.168.94.0/24 created
	I0415 06:19:26.603776   33682 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-656000" container
	I0415 06:19:26.603886   33682 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 06:19:26.654923   33682 cli_runner.go:164] Run: docker volume create force-systemd-flag-656000 --label name.minikube.sigs.k8s.io=force-systemd-flag-656000 --label created_by.minikube.sigs.k8s.io=true
	I0415 06:19:26.703149   33682 oci.go:103] Successfully created a docker volume force-systemd-flag-656000
	I0415 06:19:26.703258   33682 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-656000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-656000 --entrypoint /usr/bin/test -v force-systemd-flag-656000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 06:19:26.981332   33682 oci.go:107] Successfully prepared a docker volume force-systemd-flag-656000
	I0415 06:19:26.981372   33682 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:19:26.981386   33682 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 06:19:26.981498   33682 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-656000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 06:25:26.328591   33682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:25:26.328711   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:26.382762   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:26.382882   33682 retry.go:31] will retry after 252.525739ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:26.637797   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:26.688471   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:26.688585   33682 retry.go:31] will retry after 455.983789ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:27.146916   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:27.198656   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:27.198766   33682 retry.go:31] will retry after 827.717182ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:28.028815   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:28.083031   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	W0415 06:25:28.083135   33682 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	
	W0415 06:25:28.083156   33682 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:28.083218   33682 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:25:28.083279   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:28.133978   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:28.134073   33682 retry.go:31] will retry after 371.209292ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:28.507663   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:28.561428   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:28.561524   33682 retry.go:31] will retry after 526.313059ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:29.088382   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:29.141850   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:29.141948   33682 retry.go:31] will retry after 754.247818ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:29.898586   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:29.952736   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	W0415 06:25:29.952850   33682 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	
	W0415 06:25:29.952869   33682 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:29.952881   33682 start.go:128] duration metric: took 6m3.647890961s to createHost
	I0415 06:25:29.952950   33682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:25:29.953011   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:30.002830   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:30.002922   33682 retry.go:31] will retry after 367.239547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:30.371957   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:30.425202   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:30.425300   33682 retry.go:31] will retry after 408.316411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:30.835999   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:30.888034   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:30.888133   33682 retry.go:31] will retry after 713.03797ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:31.603524   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:31.655146   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	W0415 06:25:31.655243   33682 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	
	W0415 06:25:31.655257   33682 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:31.655326   33682 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:25:31.655390   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:31.703787   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:31.703880   33682 retry.go:31] will retry after 233.680258ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:31.939955   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:31.992679   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:31.992763   33682 retry.go:31] will retry after 348.050181ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:32.341615   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:32.392887   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:32.392988   33682 retry.go:31] will retry after 298.253949ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:32.693160   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:32.747754   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	I0415 06:25:32.747858   33682 retry.go:31] will retry after 490.466011ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:33.239690   33682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000
	W0415 06:25:33.291884   33682 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000 returned with exit code 1
	W0415 06:25:33.291985   33682 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	
	W0415 06:25:33.291998   33682 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-656000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-656000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	I0415 06:25:33.292008   33682 fix.go:56] duration metric: took 6m27.142549007s for fixHost
	I0415 06:25:33.292017   33682 start.go:83] releasing machines lock for "force-systemd-flag-656000", held for 6m27.142612781s
	W0415 06:25:33.292093   33682 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-656000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-656000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 06:25:33.335662   33682 out.go:177] 
	W0415 06:25:33.356421   33682 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 06:25:33.356464   33682 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 06:25:33.356507   33682 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 06:25:33.377470   33682 out.go:177] 
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-656000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
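Exit status 52 is the code minikube pairs with the DRV_CREATE_TIMEOUT reason printed at the end of the stderr block above, so the test harness is surfacing the same create-host timeout a second time.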
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-656000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-656000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (199.752561ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-656000 host status: state: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-656000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-15 06:25:33.673557 -0700 PDT m=+7000.807152238
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-656000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-656000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-656000",
	        "Id": "b101ab80f0b80bd995a51bd28438a001750d32e7baa479a721412957185d1a29",
	        "Created": "2024-04-15T13:19:26.563314358Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-656000"
	        }
	    }
	]

-- /stdout --
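Note that the inspect output above describes the leftover minikube network, not a container (the container was never created). A sketch that decodes just the fields shown there (Name plus the IPAM subnet and gateway), assuming the JSON key names printed above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// network mirrors only the keys used below; json.Unmarshal ignores the rest.
type network struct {
	Name string
	IPAM struct {
		Config []struct {
			Subnet  string
			Gateway string
		}
	}
	Labels map[string]string
}

func main() {
	out, err := exec.Command("docker", "inspect", "force-systemd-flag-656000").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	var nets []network // docker inspect always prints a JSON array
	if err := json.Unmarshal(out, &nets); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, n := range nets {
		for _, c := range n.IPAM.Config {
			fmt.Printf("%s: subnet=%s gateway=%s\n", n.Name, c.Subnet, c.Gateway)
		}
	}
}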
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-656000 -n force-systemd-flag-656000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-656000 -n force-systemd-flag-656000: exit status 7 (112.719953ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0415 06:25:33.836052   34220 status.go:249] status error: host: state: unknown state "force-systemd-flag-656000": docker container inspect force-systemd-flag-656000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-656000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-656000" host is not running, skipping log retrieval (state="Nonexistent")
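A sketch of how a harness might consume that status call, treating the non-zero exit as a reportable state rather than a fatal error. The meaning of exit status 7 is inferred from the "may be ok" note above, not from documented semantics, and the binary path is the one used throughout this report:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func hostState(profile string) string {
	out, err := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	state := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// e.g. exit status 7 with "Nonexistent" on stdout, as seen above
		return fmt.Sprintf("%s (exit %d)", state, ee.ExitCode())
	}
	return state
}

func main() {
	fmt.Println(hostState("force-systemd-flag-656000"))
}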
helpers_test.go:175: Cleaning up "force-systemd-flag-656000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-656000
--- FAIL: TestForceSystemdFlag (757.72s)

TestForceSystemdEnv (752.29s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-830000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0415 06:02:44.628013   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 06:04:41.568572   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 06:04:54.138915   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 06:07:57.186810   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 06:09:41.557766   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 06:09:54.128086   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-830000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m31.19208468s)

-- stdout --
	* [force-systemd-env-830000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-830000" primary control-plane node in "force-systemd-env-830000" cluster
	* Pulling base image v0.0.43-1712854342-18621 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-830000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0415 06:00:53.435132   33004 out.go:291] Setting OutFile to fd 1 ...
	I0415 06:00:53.435324   33004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 06:00:53.435330   33004 out.go:304] Setting ErrFile to fd 2...
	I0415 06:00:53.435334   33004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 06:00:53.435524   33004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 06:00:53.436970   33004 out.go:298] Setting JSON to false
	I0415 06:00:53.459006   33004 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10823,"bootTime":1713175230,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 06:00:53.459093   33004 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 06:00:53.480940   33004 out.go:177] * [force-systemd-env-830000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 06:00:53.522935   33004 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 06:00:53.522960   33004 notify.go:220] Checking for updates...
	I0415 06:00:53.566836   33004 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 06:00:53.587814   33004 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 06:00:53.608713   33004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 06:00:53.629903   33004 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	I0415 06:00:53.650842   33004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0415 06:00:53.674502   33004 config.go:182] Loaded profile config "offline-docker-615000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 06:00:53.674664   33004 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 06:00:53.730927   33004 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 06:00:53.731096   33004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 06:00:53.841190   33004 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:102 OomKillDisable:false NGoroutines:193 SystemTime:2024-04-15 13:00:53.831388807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
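The info line above is the JSON form of docker system info. A sketch that decodes a few of the fields visible in it; the struct covers only those keys (ServerVersion, CgroupDriver, MemTotal, OperatingSystem) and ignores everything else:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	ServerVersion   string
	CgroupDriver    string
	MemTotal        int64
	OperatingSystem string
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker system info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s on %s, cgroup driver %s, %d MB RAM\n",
		info.ServerVersion, info.OperatingSystem, info.CgroupDriver,
		info.MemTotal/1024/1024)
}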
	I0415 06:00:53.883739   33004 out.go:177] * Using the docker driver based on user configuration
	I0415 06:00:53.904544   33004 start.go:297] selected driver: docker
	I0415 06:00:53.904554   33004 start.go:901] validating driver "docker" against <nil>
	I0415 06:00:53.904562   33004 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 06:00:53.907490   33004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 06:00:54.013535   33004 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:102 OomKillDisable:false NGoroutines:193 SystemTime:2024-04-15 13:00:54.004266295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 06:00:54.013746   33004 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 06:00:54.013924   33004 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 06:00:54.035464   33004 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 06:00:54.057594   33004 cni.go:84] Creating CNI manager for ""
	I0415 06:00:54.057639   33004 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 06:00:54.057655   33004 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 06:00:54.057761   33004 start.go:340] cluster config:
	{Name:force-systemd-env-830000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-830000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 06:00:54.079336   33004 out.go:177] * Starting "force-systemd-env-830000" primary control-plane node in "force-systemd-env-830000" cluster
	I0415 06:00:54.121405   33004 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 06:00:54.143391   33004 out.go:177] * Pulling base image v0.0.43-1712854342-18621 ...
	I0415 06:00:54.185294   33004 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:00:54.185336   33004 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 06:00:54.185381   33004 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 06:00:54.185403   33004 cache.go:56] Caching tarball of preloaded images
	I0415 06:00:54.185654   33004 preload.go:173] Found /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 06:00:54.185674   33004 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 06:00:54.186678   33004 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/force-systemd-env-830000/config.json ...
	I0415 06:00:54.186828   33004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/force-systemd-env-830000/config.json: {Name:mk695a45927dae7d65a8f5091453da3ddbeb71bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 06:00:54.237096   33004 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon, skipping pull
	I0415 06:00:54.237113   33004 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in daemon, skipping load
	I0415 06:00:54.237137   33004 cache.go:194] Successfully downloaded all kic artifacts
	I0415 06:00:54.237188   33004 start.go:360] acquireMachinesLock for force-systemd-env-830000: {Name:mk253fd8edb9ac091ef7b5a544a884667638af4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 06:00:54.237505   33004 start.go:364] duration metric: took 304.631µs to acquireMachinesLock for "force-systemd-env-830000"
	I0415 06:00:54.237536   33004 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-830000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-830000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
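The config above carries StartHostTimeout:6m0s, which is the 360-second ceiling behind the DRV_CREATE_TIMEOUT failures later in this run. A generic sketch of such a guard using context; createHost below is a slow stand-in, not minikube's function:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// createHost stands in for a provisioning call that may hang indefinitely.
func createHost(ctx context.Context) error {
	select {
	case <-time.After(10 * time.Minute): // pretend provisioning is slow
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := createHost(ctx); errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("create host timed out in 360 seconds")
	}
}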
	I0415 06:00:54.237592   33004 start.go:125] createHost starting for "" (driver="docker")
	I0415 06:00:54.280453   33004 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 06:00:54.280863   33004 start.go:159] libmachine.API.Create for "force-systemd-env-830000" (driver="docker")
	I0415 06:00:54.280917   33004 client.go:168] LocalClient.Create starting
	I0415 06:00:54.281132   33004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 06:00:54.281235   33004 main.go:141] libmachine: Decoding PEM data...
	I0415 06:00:54.281265   33004 main.go:141] libmachine: Parsing certificate...
	I0415 06:00:54.281366   33004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 06:00:54.281440   33004 main.go:141] libmachine: Decoding PEM data...
	I0415 06:00:54.281457   33004 main.go:141] libmachine: Parsing certificate...
	I0415 06:00:54.282398   33004 cli_runner.go:164] Run: docker network inspect force-systemd-env-830000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 06:00:54.330576   33004 cli_runner.go:211] docker network inspect force-systemd-env-830000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 06:00:54.330686   33004 network_create.go:281] running [docker network inspect force-systemd-env-830000] to gather additional debugging logs...
	I0415 06:00:54.330703   33004 cli_runner.go:164] Run: docker network inspect force-systemd-env-830000
	W0415 06:00:54.378851   33004 cli_runner.go:211] docker network inspect force-systemd-env-830000 returned with exit code 1
	I0415 06:00:54.378882   33004 network_create.go:284] error running [docker network inspect force-systemd-env-830000]: docker network inspect force-systemd-env-830000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-830000 not found
	I0415 06:00:54.378901   33004 network_create.go:286] output of [docker network inspect force-systemd-env-830000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-830000 not found
	
	** /stderr **
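The lines above show the debugging pattern used on inspect failures: when the formatted docker network inspect exits non-zero, the same command is re-run without a format string and its raw stdout/stderr are captured for the log. A sketch of that shape, with the network name copied from this log:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func inspectNetwork(name string) {
	if _, err := exec.Command("docker", "network", "inspect", name,
		"--format", "{{.Name}}").Output(); err == nil {
		return // formatted inspect worked; nothing to debug
	}
	// Re-run plain to gather additional debugging logs, as above.
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("docker", "network", "inspect", name)
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()
	fmt.Printf("error: %v\nstdout:\n%s\nstderr:\n%s\n",
		err, stdout.String(), stderr.String())
}

func main() {
	inspectNetwork("force-systemd-env-830000")
}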
	I0415 06:00:54.379055   33004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:00:54.429790   33004 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:00:54.431478   33004 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:00:54.432878   33004 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:00:54.433241   33004 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002298c80}
	I0415 06:00:54.433257   33004 network_create.go:124] attempt to create docker network force-systemd-env-830000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 06:00:54.433322   33004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-830000 force-systemd-env-830000
	W0415 06:00:54.481951   33004 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-830000 force-systemd-env-830000 returned with exit code 1
	W0415 06:00:54.481991   33004 network_create.go:149] failed to create docker network force-systemd-env-830000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-830000 force-systemd-env-830000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 06:00:54.482013   33004 network_create.go:116] failed to create docker network force-systemd-env-830000 192.168.76.0/24, will retry: subnet is taken
	I0415 06:00:54.483617   33004 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:00:54.484147   33004 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002339bd0}
	I0415 06:00:54.484178   33004 network_create.go:124] attempt to create docker network force-systemd-env-830000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 06:00:54.484279   33004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-830000 force-systemd-env-830000
	I0415 06:00:54.603557   33004 network_create.go:108] docker network force-systemd-env-830000 192.168.85.0/24 created
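The subnet walk above skips /24s that are already reserved by other networks and moves on when docker reports "Pool overlaps with other one on this address space". A sketch of that loop; the candidate list mirrors the subnets seen in this log, and using a fixed list rather than computed 9-step increments is a simplification:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func createNetwork(name string) (string, error) {
	for _, subnet := range []string{
		"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24",
		"192.168.76.0/24", "192.168.85.0/24",
	} {
		gateway := strings.TrimSuffix(subnet, "0/24") + "1" // e.g. 192.168.49.1
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet is taken, try the next candidate
		}
		return "", fmt.Errorf("create %s: %w: %s", name, err, out)
	}
	return "", fmt.Errorf("no free subnet for %s", name)
}

func main() {
	fmt.Println(createNetwork("example-network"))
}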
	I0415 06:00:54.603602   33004 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-830000" container
	I0415 06:00:54.603706   33004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 06:00:54.653633   33004 cli_runner.go:164] Run: docker volume create force-systemd-env-830000 --label name.minikube.sigs.k8s.io=force-systemd-env-830000 --label created_by.minikube.sigs.k8s.io=true
	I0415 06:00:54.702681   33004 oci.go:103] Successfully created a docker volume force-systemd-env-830000
	I0415 06:00:54.702803   33004 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-830000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-830000 --entrypoint /usr/bin/test -v force-systemd-env-830000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 06:00:55.022976   33004 oci.go:107] Successfully prepared a docker volume force-systemd-env-830000
	I0415 06:00:55.023012   33004 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:00:55.023027   33004 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 06:00:55.023130   33004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-830000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
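This extraction is the step that never returns in this run; the next log line comes six minutes later, at the createHost timeout. A sketch of the equivalent invocation from Go, with the tarball path, volume name, and image reference copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f"
	// Run tar inside the kic base image to unpack the lz4 preload into the
	// machine volume, matching the Run: line above.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "force-systemd-env-830000:/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}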
	I0415 06:06:54.270307   33004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:06:54.270442   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:06:54.323897   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:06:54.324028   33004 retry.go:31] will retry after 258.5956ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:06:54.584987   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:06:54.636989   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:06:54.637101   33004 retry.go:31] will retry after 536.754266ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:06:55.176246   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:06:55.229552   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:06:55.229711   33004 retry.go:31] will retry after 704.785806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:06:55.936455   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:06:55.989435   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	W0415 06:06:55.989548   33004 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	
	W0415 06:06:55.989567   33004 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:06:55.989625   33004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:06:55.989695   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:06:56.039647   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:06:56.039745   33004 retry.go:31] will retry after 192.075469ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:06:56.234232   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:06:56.286728   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:06:56.286820   33004 retry.go:31] will retry after 434.520422ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:06:56.723746   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:06:56.775508   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:06:56.775605   33004 retry.go:31] will retry after 410.967775ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:06:57.188932   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:06:57.241947   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	W0415 06:06:57.242060   33004 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	
	W0415 06:06:57.242077   33004 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
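The free-space probe is df -BG /var | awk 'NR==2{print $4}' run over SSH, and it never gets a session in this run. As a sketch, parsing the value that command would print (something like "17G") into an integer GiB count; the sample value is illustrative only:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// gib converts df -BG output such as "17G" into an integer number of GiB.
func gib(dfOutput string) (int, error) {
	return strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(dfOutput), "G"))
}

func main() {
	fmt.Println(gib("17G\n")) // sample value; the real one comes from the guest
}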
	I0415 06:06:57.242095   33004 start.go:128] duration metric: took 6m3.017578073s to createHost
	I0415 06:06:57.242104   33004 start.go:83] releasing machines lock for "force-systemd-env-830000", held for 6m3.017679509s
	W0415 06:06:57.242118   33004 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 06:06:57.242551   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:06:57.290977   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	I0415 06:06:57.291027   33004 delete.go:82] Unable to get host status for force-systemd-env-830000, assuming it has already been deleted: state: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	W0415 06:06:57.291099   33004 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 06:06:57.291109   33004 start.go:728] Will try again in 5 seconds ...
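The outer retry above is deliberately simple: one fixed 5-second pause, then a second attempt. A sketch of that shape; the two-attempt limit is read off this log rather than taken from minikube's source, and startHost is a failing stand-in:

package main

import (
	"fmt"
	"time"
)

func startHost() error {
	return fmt.Errorf("create host timed out in 360 seconds")
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to DRV_CREATE_TIMEOUT:", err)
		}
	}
}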
	I0415 06:07:02.291476   33004 start.go:360] acquireMachinesLock for force-systemd-env-830000: {Name:mk253fd8edb9ac091ef7b5a544a884667638af4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 06:07:02.292351   33004 start.go:364] duration metric: took 812.853µs to acquireMachinesLock for "force-systemd-env-830000"
	I0415 06:07:02.292485   33004 start.go:96] Skipping create...Using existing machine configuration
	I0415 06:07:02.292503   33004 fix.go:54] fixHost starting: 
	I0415 06:07:02.293061   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:07:02.345820   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	I0415 06:07:02.345868   33004 fix.go:112] recreateIfNeeded on force-systemd-env-830000: state= err=unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:02.345888   33004 fix.go:117] machineExists: false. err=machine does not exist
	I0415 06:07:02.367806   33004 out.go:177] * docker "force-systemd-env-830000" container is missing, will recreate.
	I0415 06:07:02.390376   33004 delete.go:124] DEMOLISHING force-systemd-env-830000 ...
	I0415 06:07:02.390579   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:07:02.439839   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	W0415 06:07:02.439900   33004 stop.go:83] unable to get state: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:02.439919   33004 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:02.440315   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:07:02.488248   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	I0415 06:07:02.488300   33004 delete.go:82] Unable to get host status for force-systemd-env-830000, assuming it has already been deleted: state: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:02.488403   33004 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-830000
	W0415 06:07:02.536712   33004 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-830000 returned with exit code 1
	I0415 06:07:02.536750   33004 kic.go:371] could not find the container force-systemd-env-830000 to remove it. will try anyways
	I0415 06:07:02.536824   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:07:02.585010   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	W0415 06:07:02.585066   33004 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:02.585144   33004 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-830000 /bin/bash -c "sudo init 0"
	W0415 06:07:02.633637   33004 cli_runner.go:211] docker exec --privileged -t force-systemd-env-830000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 06:07:02.633668   33004 oci.go:650] error shutdown force-systemd-env-830000: docker exec --privileged -t force-systemd-env-830000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:03.634141   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:07:03.687524   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	I0415 06:07:03.687569   33004 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:03.687593   33004 oci.go:664] temporary error: container force-systemd-env-830000 status is  but expect it to be exited
	I0415 06:07:03.687617   33004 retry.go:31] will retry after 724.170815ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:04.414139   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:07:04.466534   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	I0415 06:07:04.466587   33004 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:04.466596   33004 oci.go:664] temporary error: container force-systemd-env-830000 status is  but expect it to be exited
	I0415 06:07:04.466630   33004 retry.go:31] will retry after 903.241653ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:05.370694   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:07:05.421782   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	I0415 06:07:05.421832   33004 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:05.421842   33004 oci.go:664] temporary error: container force-systemd-env-830000 status is  but expect it to be exited
	I0415 06:07:05.421867   33004 retry.go:31] will retry after 984.898451ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:06.407935   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:07:06.459550   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	I0415 06:07:06.459608   33004 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:06.459618   33004 oci.go:664] temporary error: container force-systemd-env-830000 status is  but expect it to be exited
	I0415 06:07:06.459643   33004 retry.go:31] will retry after 980.452064ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:07.441264   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:07:07.495229   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	I0415 06:07:07.495280   33004 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:07.495294   33004 oci.go:664] temporary error: container force-systemd-env-830000 status is  but expect it to be exited
	I0415 06:07:07.495324   33004 retry.go:31] will retry after 3.704361808s: couldn't verify container is exited. %v: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:11.199977   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:07:11.249315   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	I0415 06:07:11.249368   33004 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:11.249378   33004 oci.go:664] temporary error: container force-systemd-env-830000 status is  but expect it to be exited
	I0415 06:07:11.249405   33004 retry.go:31] will retry after 5.54156938s: couldn't verify container is exited. %v: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:16.792681   33004 cli_runner.go:164] Run: docker container inspect force-systemd-env-830000 --format={{.State.Status}}
	W0415 06:07:16.845572   33004 cli_runner.go:211] docker container inspect force-systemd-env-830000 --format={{.State.Status}} returned with exit code 1
	I0415 06:07:16.845622   33004 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:07:16.845635   33004 oci.go:664] temporary error: container force-systemd-env-830000 status is  but expect it to be exited
	I0415 06:07:16.845669   33004 oci.go:88] couldn't shut down force-systemd-env-830000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	 
	I0415 06:07:16.845749   33004 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-830000
	I0415 06:07:16.894035   33004 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-830000
	W0415 06:07:16.964765   33004 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-830000 returned with exit code 1
	I0415 06:07:16.964880   33004 cli_runner.go:164] Run: docker network inspect force-systemd-env-830000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:07:17.014514   33004 cli_runner.go:164] Run: docker network rm force-systemd-env-830000
	I0415 06:07:17.120781   33004 fix.go:124] Sleeping 1 second for extra luck!
	I0415 06:07:18.122883   33004 start.go:125] createHost starting for "" (driver="docker")
	I0415 06:07:18.145058   33004 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0415 06:07:18.145237   33004 start.go:159] libmachine.API.Create for "force-systemd-env-830000" (driver="docker")
	I0415 06:07:18.145262   33004 client.go:168] LocalClient.Create starting
	I0415 06:07:18.145484   33004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 06:07:18.145595   33004 main.go:141] libmachine: Decoding PEM data...
	I0415 06:07:18.145620   33004 main.go:141] libmachine: Parsing certificate...
	I0415 06:07:18.145702   33004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 06:07:18.145777   33004 main.go:141] libmachine: Decoding PEM data...
	I0415 06:07:18.145792   33004 main.go:141] libmachine: Parsing certificate...
	I0415 06:07:18.167399   33004 cli_runner.go:164] Run: docker network inspect force-systemd-env-830000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 06:07:18.219125   33004 cli_runner.go:211] docker network inspect force-systemd-env-830000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 06:07:18.219224   33004 network_create.go:281] running [docker network inspect force-systemd-env-830000] to gather additional debugging logs...
	I0415 06:07:18.219245   33004 cli_runner.go:164] Run: docker network inspect force-systemd-env-830000
	W0415 06:07:18.267021   33004 cli_runner.go:211] docker network inspect force-systemd-env-830000 returned with exit code 1
	I0415 06:07:18.267054   33004 network_create.go:284] error running [docker network inspect force-systemd-env-830000]: docker network inspect force-systemd-env-830000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-830000 not found
	I0415 06:07:18.267065   33004 network_create.go:286] output of [docker network inspect force-systemd-env-830000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-830000 not found
	
	** /stderr **
	I0415 06:07:18.267206   33004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 06:07:18.317077   33004 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:07:18.318646   33004 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:07:18.320267   33004 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:07:18.321835   33004 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:07:18.323250   33004 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:07:18.324556   33004 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 06:07:18.324963   33004 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002299670}
	I0415 06:07:18.324979   33004 network_create.go:124] attempt to create docker network force-systemd-env-830000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0415 06:07:18.325048   33004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-830000 force-systemd-env-830000
	I0415 06:07:18.410083   33004 network_create.go:108] docker network force-systemd-env-830000 192.168.103.0/24 created
	I0415 06:07:18.410123   33004 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-830000" container
	I0415 06:07:18.410226   33004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 06:07:18.460360   33004 cli_runner.go:164] Run: docker volume create force-systemd-env-830000 --label name.minikube.sigs.k8s.io=force-systemd-env-830000 --label created_by.minikube.sigs.k8s.io=true
	I0415 06:07:18.508915   33004 oci.go:103] Successfully created a docker volume force-systemd-env-830000
	I0415 06:07:18.509033   33004 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-830000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-830000 --entrypoint /usr/bin/test -v force-systemd-env-830000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 06:07:18.744419   33004 oci.go:107] Successfully prepared a docker volume force-systemd-env-830000
	I0415 06:07:18.744467   33004 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 06:07:18.744480   33004 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 06:07:18.744592   33004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-830000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 06:13:18.134635   33004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:13:18.134835   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:18.187305   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:18.187423   33004 retry.go:31] will retry after 308.482895ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:18.496809   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:18.547869   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:18.547989   33004 retry.go:31] will retry after 229.516716ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:18.779936   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:18.830966   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:18.831065   33004 retry.go:31] will retry after 568.965402ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:19.402397   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:19.452790   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:19.452888   33004 retry.go:31] will retry after 543.797103ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:19.999109   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:20.049904   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	W0415 06:13:20.050025   33004 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	
	W0415 06:13:20.050043   33004 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:20.050100   33004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:13:20.050168   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:20.098379   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:20.098485   33004 retry.go:31] will retry after 269.371348ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:20.369676   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:20.424493   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:20.424601   33004 retry.go:31] will retry after 457.186553ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:20.883847   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:20.934234   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:20.934339   33004 retry.go:31] will retry after 713.494214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:21.648360   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:21.701609   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	W0415 06:13:21.701740   33004 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	
	W0415 06:13:21.701758   33004 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:21.701769   33004 start.go:128] duration metric: took 6m3.591966574s to createHost
	I0415 06:13:21.701865   33004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 06:13:21.701981   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:21.752306   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:21.752408   33004 retry.go:31] will retry after 148.574857ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:21.902056   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:21.952208   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:21.952299   33004 retry.go:31] will retry after 276.412786ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:22.231090   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:22.282778   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:22.282886   33004 retry.go:31] will retry after 784.802795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:23.070068   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:23.123338   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	W0415 06:13:23.123440   33004 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	
	W0415 06:13:23.123454   33004 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:23.123519   33004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 06:13:23.123587   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:23.171804   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:23.171898   33004 retry.go:31] will retry after 288.149004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:23.462518   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:23.515001   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:23.515098   33004 retry.go:31] will retry after 229.495068ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:23.746974   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:23.799731   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	I0415 06:13:23.799833   33004 retry.go:31] will retry after 535.534236ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:24.337756   33004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000
	W0415 06:13:24.391846   33004 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000 returned with exit code 1
	W0415 06:13:24.391959   33004 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	
	W0415 06:13:24.391972   33004 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-830000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-830000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	I0415 06:13:24.391982   33004 fix.go:56] duration metric: took 6m22.113257822s for fixHost
	I0415 06:13:24.391991   33004 start.go:83] releasing machines lock for "force-systemd-env-830000", held for 6m22.113323592s
	W0415 06:13:24.392067   33004 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-830000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-830000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 06:13:24.435571   33004 out.go:177] 
	W0415 06:13:24.456513   33004 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 06:13:24.456558   33004 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 06:13:24.456587   33004 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 06:13:24.477636   33004 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-830000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-830000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-830000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (197.982532ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-830000 host status: state: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-830000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-15 06:13:24.749969 -0700 PDT m=+6271.942622047
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-830000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-830000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-830000",
	        "Id": "64c69e59d4d9f7a379fe616f590823c03430c3d69a9f9bfa91f6cdf6384859fc",
	        "Created": "2024-04-15T13:07:18.371268182Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-830000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-830000 -n force-systemd-env-830000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-830000 -n force-systemd-env-830000: exit status 7 (114.072954ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0415 06:13:24.914292   33787 status.go:249] status error: host: state: unknown state "force-systemd-env-830000": docker container inspect force-systemd-env-830000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-830000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-830000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-830000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-830000
--- FAIL: TestForceSystemdEnv (752.29s)
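
The verify-shutdown loop above polls `docker container inspect --format={{.State.Status}}` and keeps retrying when the daemon answers "No such container", even though an absent container cannot be shut down any further. A minimal Go sketch of that polling pattern with the absent case short-circuited; the function name, retry bound, and control flow are illustrative assumptions, not minikube's actual oci.go code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerState reports the container's status; gone is true when the
	// daemon says the container no longer exists.
	func containerState(name string) (state string, gone bool, err error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "No such container") {
				return "", true, nil // already removed: treat as shut down
			}
			return "", false, fmt.Errorf("inspect %s: %v: %s", name, err, out)
		}
		return strings.TrimSpace(string(out)), false, nil
	}

	func main() {
		const name = "force-systemd-env-830000" // container from the log above
		for attempt := 1; attempt <= 5; attempt++ {
			state, gone, err := containerState(name)
			if err == nil && (gone || state == "exited") {
				fmt.Println("container is down")
				return
			}
			fmt.Printf("attempt %d: state=%q err=%v; retrying\n", attempt, state, err)
			time.Sleep(time.Second)
		}
		fmt.Println("gave up waiting for shutdown")
	}
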

TestMountStart/serial/VerifyMountPostDelete (882.76s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-001000 ssh -- ls /minikube-host
E0415 04:59:41.524704   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:59:54.094043   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 05:01:17.142722   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 05:04:41.523747   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:04:54.095730   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 05:09:41.524317   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:09:54.094960   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-001000 ssh -- ls /minikube-host: signal: killed (14m42.322100355s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-001000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostDelete]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-001000
helpers_test.go:235: (dbg) docker inspect mount-start-2-001000:

-- stdout --
	[
	    {
	        "Id": "80074bbf93b46594611dd4f69bed90633424a7a7684131027f54af8397894d4a",
	        "Created": "2024-04-15T11:57:06.926507678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470341,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-15T11:57:07.086632521Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8e3065bd048af0808d8ea937179eac2f6aaaa6840181cae82f858bfe4571416c",
	        "ResolvConfPath": "/var/lib/docker/containers/80074bbf93b46594611dd4f69bed90633424a7a7684131027f54af8397894d4a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80074bbf93b46594611dd4f69bed90633424a7a7684131027f54af8397894d4a/hostname",
	        "HostsPath": "/var/lib/docker/containers/80074bbf93b46594611dd4f69bed90633424a7a7684131027f54af8397894d4a/hosts",
	        "LogPath": "/var/lib/docker/containers/80074bbf93b46594611dd4f69bed90633424a7a7684131027f54af8397894d4a/80074bbf93b46594611dd4f69bed90633424a7a7684131027f54af8397894d4a-json.log",
	        "Name": "/mount-start-2-001000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-001000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-001000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0571e373255a0535c5a25ef6b1f815ec54e017395065c0a711063d48b8624d17-init/diff:/var/lib/docker/overlay2/8198eec87fd805ecb990c432b1fcd123d0aa07faf2f8dfa595be77373eabd6d6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0571e373255a0535c5a25ef6b1f815ec54e017395065c0a711063d48b8624d17/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0571e373255a0535c5a25ef6b1f815ec54e017395065c0a711063d48b8624d17/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0571e373255a0535c5a25ef6b1f815ec54e017395065c0a711063d48b8624d17/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-001000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-001000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-001000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-001000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-001000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "202029981ffb44b69d49bb4b7155c0c468dcdc552b5e1231ae0beb641855ca4f",
	            "SandboxKey": "/var/run/docker/netns/202029981ffb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57887"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57888"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57889"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-001000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "0c1f5f8eb90e3d84087d8ddc34f639bcac01e39787d266bc72328a5bfcc060ed",
	                    "EndpointID": "7d10556414f1c29c9fed61bdd34c79645e70d7b12c04b6514b757eb64b75c1a9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-2-001000",
	                        "80074bbf93b4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-001000 -n mount-start-2-001000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-001000 -n mount-start-2-001000: exit status 6 (382.266872ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0415 05:11:57.804226   30392 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-001000" does not appear in /Users/jenkins/minikube-integration/18644-22866/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-001000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountPostDelete (882.76s)
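
The mount probe above hung until the harness killed it after roughly 14m42s ("signal: killed"), which is the signature of a subprocess run under a deadline. A self-contained Go sketch of that pattern, assuming a context-based timeout (the two-second deadline and `sleep 60` are stand-ins for the real binary and its much longer timeout):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Short deadline for illustration; the harness allows far longer.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()

		// Stand-in for: out/minikube-darwin-amd64 -p <profile> ssh -- ls /minikube-host
		cmd := exec.CommandContext(ctx, "sleep", "60")
		if err := cmd.Run(); err != nil {
			fmt.Println("command failed:", err) // reports "signal: killed" after the deadline
		}
	}
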

TestMultiNode/serial/FreshStart2Nodes (755.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-701000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0415 05:14:41.525186   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:14:54.094724   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 05:17:57.146288   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 05:19:41.526090   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:19:54.096163   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 05:24:41.569595   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:24:54.140390   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-701000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m35.78975848s)

-- stdout --
	* [multinode-701000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-701000" primary control-plane node in "multinode-701000" cluster
	* Pulling base image v0.0.43-1712854342-18621 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-701000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0415 05:13:06.848248   30493 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:13:06.848515   30493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:13:06.848520   30493 out.go:304] Setting ErrFile to fd 2...
	I0415 05:13:06.848524   30493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:13:06.848704   30493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:13:06.850218   30493 out.go:298] Setting JSON to false
	I0415 05:13:06.872331   30493 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7956,"bootTime":1713175230,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 05:13:06.872431   30493 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:13:06.894658   30493 out.go:177] * [multinode-701000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 05:13:06.916665   30493 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:13:06.916671   30493 notify.go:220] Checking for updates...
	I0415 05:13:06.939550   30493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 05:13:06.962263   30493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 05:13:06.983473   30493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:13:07.004542   30493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	I0415 05:13:07.025236   30493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:13:07.046958   30493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:13:07.103010   30493 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 05:13:07.103176   30493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 05:13:07.207734   30493 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:false NGoroutines:103 SystemTime:2024-04-15 12:13:07.196931013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 05:13:07.250295   30493 out.go:177] * Using the docker driver based on user configuration
	I0415 05:13:07.271554   30493 start.go:297] selected driver: docker
	I0415 05:13:07.271585   30493 start.go:901] validating driver "docker" against <nil>
	I0415 05:13:07.271600   30493 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:13:07.275961   30493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 05:13:07.384205   30493 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:false NGoroutines:103 SystemTime:2024-04-15 12:13:07.373043758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 05:13:07.384395   30493 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 05:13:07.384571   30493 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:13:07.406463   30493 out.go:177] * Using Docker Desktop driver with root privileges
	I0415 05:13:07.428173   30493 cni.go:84] Creating CNI manager for ""
	I0415 05:13:07.428205   30493 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 05:13:07.428222   30493 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 05:13:07.428344   30493 start.go:340] cluster config:
	{Name:multinode-701000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:13:07.450207   30493 out.go:177] * Starting "multinode-701000" primary control-plane node in "multinode-701000" cluster
	I0415 05:13:07.491968   30493 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 05:13:07.513206   30493 out.go:177] * Pulling base image v0.0.43-1712854342-18621 ...
	I0415 05:13:07.555048   30493 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:13:07.555100   30493 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 05:13:07.555130   30493 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 05:13:07.555149   30493 cache.go:56] Caching tarball of preloaded images
	I0415 05:13:07.555357   30493 preload.go:173] Found /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 05:13:07.555376   30493 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:13:07.556983   30493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/multinode-701000/config.json ...
	I0415 05:13:07.557120   30493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/multinode-701000/config.json: {Name:mkad7d23e850d3d1ba5485b9ea6624582e0c3838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 05:13:07.606459   30493 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon, skipping pull
	I0415 05:13:07.606475   30493 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in daemon, skipping load
	I0415 05:13:07.606495   30493 cache.go:194] Successfully downloaded all kic artifacts
	I0415 05:13:07.606552   30493 start.go:360] acquireMachinesLock for multinode-701000: {Name:mk2f276f5ed2de5433c43cfc6c1200ad22d6fb74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:13:07.606722   30493 start.go:364] duration metric: took 157.316µs to acquireMachinesLock for "multinode-701000"
	I0415 05:13:07.606752   30493 start.go:93] Provisioning new machine with config: &{Name:multinode-701000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 05:13:07.606802   30493 start.go:125] createHost starting for "" (driver="docker")
	I0415 05:13:07.628262   30493 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 05:13:07.628618   30493 start.go:159] libmachine.API.Create for "multinode-701000" (driver="docker")
	I0415 05:13:07.628658   30493 client.go:168] LocalClient.Create starting
	I0415 05:13:07.628920   30493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 05:13:07.629024   30493 main.go:141] libmachine: Decoding PEM data...
	I0415 05:13:07.629053   30493 main.go:141] libmachine: Parsing certificate...
	I0415 05:13:07.629141   30493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 05:13:07.629224   30493 main.go:141] libmachine: Decoding PEM data...
	I0415 05:13:07.629238   30493 main.go:141] libmachine: Parsing certificate...
	I0415 05:13:07.649578   30493 cli_runner.go:164] Run: docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 05:13:07.699369   30493 cli_runner.go:211] docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 05:13:07.699463   30493 network_create.go:281] running [docker network inspect multinode-701000] to gather additional debugging logs...
	I0415 05:13:07.699476   30493 cli_runner.go:164] Run: docker network inspect multinode-701000
	W0415 05:13:07.747903   30493 cli_runner.go:211] docker network inspect multinode-701000 returned with exit code 1
	I0415 05:13:07.747929   30493 network_create.go:284] error running [docker network inspect multinode-701000]: docker network inspect multinode-701000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-701000 not found
	I0415 05:13:07.747938   30493 network_create.go:286] output of [docker network inspect multinode-701000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-701000 not found
	
	** /stderr **
	I0415 05:13:07.748068   30493 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 05:13:07.798052   30493 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:13:07.799649   30493 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:13:07.799994   30493 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022b1200}
	I0415 05:13:07.800014   30493 network_create.go:124] attempt to create docker network multinode-701000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 05:13:07.800093   30493 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701000 multinode-701000
	I0415 05:13:07.885836   30493 network_create.go:108] docker network multinode-701000 192.168.67.0/24 created
	I0415 05:13:07.885873   30493 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-701000" container
	I0415 05:13:07.885978   30493 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 05:13:07.934696   30493 cli_runner.go:164] Run: docker volume create multinode-701000 --label name.minikube.sigs.k8s.io=multinode-701000 --label created_by.minikube.sigs.k8s.io=true
	I0415 05:13:07.984032   30493 oci.go:103] Successfully created a docker volume multinode-701000
	I0415 05:13:07.984146   30493 cli_runner.go:164] Run: docker run --rm --name multinode-701000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-701000 --entrypoint /usr/bin/test -v multinode-701000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 05:13:08.301947   30493 oci.go:107] Successfully prepared a docker volume multinode-701000
	I0415 05:13:08.301988   30493 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:13:08.302004   30493 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 05:13:08.302100   30493 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-701000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 05:19:07.631498   30493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 05:19:07.631631   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:19:07.683550   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:19:07.683672   30493 retry.go:31] will retry after 310.428964ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:07.996516   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:19:08.051849   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:19:08.051940   30493 retry.go:31] will retry after 246.13835ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:08.300476   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:19:08.348542   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:19:08.348642   30493 retry.go:31] will retry after 354.143586ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:08.705066   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:19:08.758605   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:19:08.758722   30493 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:19:08.758746   30493 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:08.758803   30493 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 05:19:08.758860   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:19:08.809012   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:19:08.809116   30493 retry.go:31] will retry after 234.807214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:09.044599   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:19:09.097362   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:19:09.097450   30493 retry.go:31] will retry after 437.511761ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:09.535696   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:19:09.588766   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:19:09.588859   30493 retry.go:31] will retry after 357.688418ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:09.948252   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:19:10.001369   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:19:10.001476   30493 retry.go:31] will retry after 664.612399ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:10.666496   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:19:10.716872   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:19:10.716971   30493 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:19:10.716990   30493 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:10.717008   30493 start.go:128] duration metric: took 6m3.10973452s to createHost
	I0415 05:19:10.717015   30493 start.go:83] releasing machines lock for "multinode-701000", held for 6m3.109829204s
	W0415 05:19:10.717031   30493 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0415 05:19:10.717460   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:10.765358   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:10.765417   30493 delete.go:82] Unable to get host status for multinode-701000, assuming it has already been deleted: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	W0415 05:19:10.765524   30493 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0415 05:19:10.765532   30493 start.go:728] Will try again in 5 seconds ...
	I0415 05:19:15.765970   30493 start.go:360] acquireMachinesLock for multinode-701000: {Name:mk2f276f5ed2de5433c43cfc6c1200ad22d6fb74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:19:15.766218   30493 start.go:364] duration metric: took 196.43µs to acquireMachinesLock for "multinode-701000"
	I0415 05:19:15.766258   30493 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:19:15.766277   30493 fix.go:54] fixHost starting: 
	I0415 05:19:15.766777   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:15.818997   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:15.819045   30493 fix.go:112] recreateIfNeeded on multinode-701000: state= err=unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:15.819063   30493 fix.go:117] machineExists: false. err=machine does not exist
	I0415 05:19:15.841003   30493 out.go:177] * docker "multinode-701000" container is missing, will recreate.
	I0415 05:19:15.884213   30493 delete.go:124] DEMOLISHING multinode-701000 ...
	I0415 05:19:15.884395   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:15.933884   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	W0415 05:19:15.933945   30493 stop.go:83] unable to get state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:15.933965   30493 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:15.934354   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:15.982663   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:15.982721   30493 delete.go:82] Unable to get host status for multinode-701000, assuming it has already been deleted: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:15.982812   30493 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-701000
	W0415 05:19:16.031052   30493 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-701000 returned with exit code 1
	I0415 05:19:16.031092   30493 kic.go:371] could not find the container multinode-701000 to remove it. will try anyways
	I0415 05:19:16.031172   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:16.079367   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	W0415 05:19:16.079411   30493 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:16.079494   30493 cli_runner.go:164] Run: docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0"
	W0415 05:19:16.128121   30493 cli_runner.go:211] docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 05:19:16.128156   30493 oci.go:650] error shutdown multinode-701000: docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:17.128637   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:17.180822   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:17.180866   30493 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:17.180883   30493 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:19:17.180910   30493 retry.go:31] will retry after 444.327726ms: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:17.625647   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:17.678227   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:17.678272   30493 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:17.678283   30493 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:19:17.678309   30493 retry.go:31] will retry after 607.752113ms: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:18.288461   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:18.340527   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:18.340578   30493 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:18.340587   30493 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:19:18.340611   30493 retry.go:31] will retry after 1.006164023s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:19.349168   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:19.400232   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:19.400278   30493 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:19.400286   30493 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:19:19.400314   30493 retry.go:31] will retry after 1.861832406s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:21.264487   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:21.316834   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:21.316884   30493 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:21.316897   30493 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:19:21.316937   30493 retry.go:31] will retry after 1.493265896s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:22.811094   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:22.859992   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:22.860037   30493 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:22.860048   30493 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:19:22.860071   30493 retry.go:31] will retry after 3.966043033s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:26.828540   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:26.883784   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:26.883836   30493 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:26.883845   30493 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:19:26.883866   30493 retry.go:31] will retry after 3.561497087s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:30.447775   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:30.499954   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:30.500000   30493 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:30.500012   30493 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:19:30.500035   30493 retry.go:31] will retry after 5.069705565s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:35.571722   30493 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:19:35.623021   30493 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:19:35.623065   30493 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:19:35.623074   30493 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:19:35.623106   30493 oci.go:88] couldn't shut down multinode-701000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	 
	I0415 05:19:35.623183   30493 cli_runner.go:164] Run: docker rm -f -v multinode-701000
	I0415 05:19:35.673593   30493 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-701000
	W0415 05:19:35.720911   30493 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-701000 returned with exit code 1
	I0415 05:19:35.721022   30493 cli_runner.go:164] Run: docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 05:19:35.769709   30493 cli_runner.go:164] Run: docker network rm multinode-701000
	I0415 05:19:35.865351   30493 fix.go:124] Sleeping 1 second for extra luck!
	I0415 05:19:36.867581   30493 start.go:125] createHost starting for "" (driver="docker")
	I0415 05:19:36.896483   30493 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 05:19:36.896719   30493 start.go:159] libmachine.API.Create for "multinode-701000" (driver="docker")
	I0415 05:19:36.896761   30493 client.go:168] LocalClient.Create starting
	I0415 05:19:36.896980   30493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 05:19:36.897081   30493 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:36.897106   30493 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:36.897206   30493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 05:19:36.897281   30493 main.go:141] libmachine: Decoding PEM data...
	I0415 05:19:36.897298   30493 main.go:141] libmachine: Parsing certificate...
	I0415 05:19:36.918029   30493 cli_runner.go:164] Run: docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 05:19:36.969006   30493 cli_runner.go:211] docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 05:19:36.969108   30493 network_create.go:281] running [docker network inspect multinode-701000] to gather additional debugging logs...
	I0415 05:19:36.969137   30493 cli_runner.go:164] Run: docker network inspect multinode-701000
	W0415 05:19:37.018531   30493 cli_runner.go:211] docker network inspect multinode-701000 returned with exit code 1
	I0415 05:19:37.018560   30493 network_create.go:284] error running [docker network inspect multinode-701000]: docker network inspect multinode-701000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-701000 not found
	I0415 05:19:37.018572   30493 network_create.go:286] output of [docker network inspect multinode-701000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-701000 not found
	
	** /stderr **
	I0415 05:19:37.018688   30493 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 05:19:37.069110   30493 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:19:37.070680   30493 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:19:37.072095   30493 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:19:37.072454   30493 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00079d160}
	I0415 05:19:37.072468   30493 network_create.go:124] attempt to create docker network multinode-701000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 05:19:37.072537   30493 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701000 multinode-701000
	W0415 05:19:37.120713   30493 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701000 multinode-701000 returned with exit code 1
	W0415 05:19:37.120747   30493 network_create.go:149] failed to create docker network multinode-701000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701000 multinode-701000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 05:19:37.120771   30493 network_create.go:116] failed to create docker network multinode-701000 192.168.76.0/24, will retry: subnet is taken
	I0415 05:19:37.122365   30493 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:19:37.122857   30493 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ed6090}
	I0415 05:19:37.122872   30493 network_create.go:124] attempt to create docker network multinode-701000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 05:19:37.123001   30493 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701000 multinode-701000
	I0415 05:19:37.207626   30493 network_create.go:108] docker network multinode-701000 192.168.85.0/24 created
	I0415 05:19:37.207664   30493 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-701000" container
	I0415 05:19:37.207777   30493 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 05:19:37.256551   30493 cli_runner.go:164] Run: docker volume create multinode-701000 --label name.minikube.sigs.k8s.io=multinode-701000 --label created_by.minikube.sigs.k8s.io=true
	I0415 05:19:37.304749   30493 oci.go:103] Successfully created a docker volume multinode-701000
	I0415 05:19:37.304854   30493 cli_runner.go:164] Run: docker run --rm --name multinode-701000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-701000 --entrypoint /usr/bin/test -v multinode-701000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 05:19:37.548370   30493 oci.go:107] Successfully prepared a docker volume multinode-701000
	I0415 05:19:37.548417   30493 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:19:37.548430   30493 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 05:19:37.548529   30493 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-701000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 05:25:36.940906   30493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 05:25:36.941033   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:36.994612   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:36.994727   30493 retry.go:31] will retry after 180.728129ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:37.177827   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:37.229652   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:37.229767   30493 retry.go:31] will retry after 275.960501ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:37.508111   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:37.558540   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:37.558634   30493 retry.go:31] will retry after 510.536587ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:38.070858   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:38.123880   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:25:38.123996   30493 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:25:38.124014   30493 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:38.124069   30493 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 05:25:38.124136   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:38.172752   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:38.172851   30493 retry.go:31] will retry after 365.407906ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:38.539161   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:38.593819   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:38.593922   30493 retry.go:31] will retry after 187.947385ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:38.782542   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:38.834441   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:38.834539   30493 retry.go:31] will retry after 558.548002ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:39.395478   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:39.447653   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:25:39.447765   30493 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:25:39.447780   30493 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:39.447806   30493 start.go:128] duration metric: took 6m2.531762348s to createHost
	I0415 05:25:39.447871   30493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 05:25:39.447924   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:39.496747   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:39.496844   30493 retry.go:31] will retry after 344.999938ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:39.844168   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:39.895949   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:39.896044   30493 retry.go:31] will retry after 350.603361ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:40.248990   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:40.301777   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:40.301874   30493 retry.go:31] will retry after 815.579996ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:41.118564   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:41.170348   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:25:41.170452   30493 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:25:41.170473   30493 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:41.170537   30493 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 05:25:41.170601   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:41.219530   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:41.219624   30493 retry.go:31] will retry after 316.878647ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:41.538869   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:41.592187   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:41.592283   30493 retry.go:31] will retry after 322.132304ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:41.916274   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:41.967460   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:25:41.967553   30493 retry.go:31] will retry after 447.462162ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:42.417429   30493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:25:42.469097   30493 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:25:42.469193   30493 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:25:42.469212   30493 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:25:42.469223   30493 fix.go:56] duration metric: took 6m26.659824027s for fixHost
	I0415 05:25:42.469229   30493 start.go:83] releasing machines lock for "multinode-701000", held for 6m26.659871537s
	W0415 05:25:42.469311   30493 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-701000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-701000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 05:25:42.511742   30493 out.go:177] 
	W0415 05:25:42.532951   30493 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 05:25:42.533003   30493 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 05:25:42.533037   30493 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 05:25:42.575775   30493 out.go:177] 

                                                
                                                
** /stderr **
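Every failed inspect call in the stderr log above is the same probe: minikube asking Docker which host port was published for the container's 22/tcp, using the Go template visible in the command line. For reference, a minimal sketch of that probe in Go via os/exec (illustrative only; minikube's real implementation sits behind cli_runner.go and differs in detail):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshHostPort asks Docker for the host port published for 22/tcp on the
	// named container -- the exact template string seen throughout the log.
	func sshHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).CombinedOutput()
		if err != nil {
			// A missing container makes docker exit 1 and print
			// "Error response from daemon: No such container: ..." --
			// the failure repeated on every retry above.
			return "", fmt.Errorf("get port 22 for %q: %w: %s", container, err, out)
		}
		return string(out), nil
	}

	func main() {
		port, err := sshHostPort("multinode-701000")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", port)
	}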
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-701000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "d713e4b43e2e6b1e23f3c061f7f248be3638cac07503f6f749e2ad5a1aa6eeb4",
	        "Created": "2024-04-15T12:19:37.168154568Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (113.256577ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:25:42.796046   30757 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (755.97s)
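The uneven delays in the stderr log above (275.9ms, 510.5ms, 365.4ms, ...) come from minikube's retry helper randomizing each backoff so retries do not synchronize. A hedged sketch of the pattern, not the actual retry.go code:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter retries fn up to attempts times, sleeping a randomized
	// interval around base between tries -- which is why the "will retry
	// after ..." durations in the log are all slightly different.
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base/2 + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retryWithJitter(3, 300*time.Millisecond, func() error {
			return fmt.Errorf("get port 22 for %q: exit status 1", "multinode-701000")
		})
	}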

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (94.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (100.908629ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-701000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- rollout status deployment/busybox: exit status 1 (101.024591ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.679032ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.531115ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.413342ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.87193ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.705937ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.669821ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.065508ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.329437ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.743117ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.869666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.870067ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (102.174271ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- exec  -- nslookup kubernetes.io: exit status 1 (102.40009ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- exec  -- nslookup kubernetes.default: exit status 1 (101.779187ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (101.381559ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "d713e4b43e2e6b1e23f3c061f7f248be3638cac07503f6f749e2ad5a1aa6eeb4",
	        "Created": "2024-04-15T12:19:37.168154568Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (112.812606ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:27:17.570381   30826 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (94.78s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-701000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (101.542134ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-701000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "d713e4b43e2e6b1e23f3c061f7f248be3638cac07503f6f749e2ad5a1aa6eeb4",
	        "Created": "2024-04-15T12:19:37.168154568Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (113.702195ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:27:17.838076   30835 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                    
TestMultiNode/serial/AddNode (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-701000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-701000 -v 3 --alsologtostderr: exit status 80 (198.957313ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:17.900478   30839 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:17.901183   30839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:17.901192   30839 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:17.901199   30839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:17.901633   30839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:17.902315   30839 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:17.902596   30839 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:17.902956   30839 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:27:17.951027   30839 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:27:17.973115   30839 out.go:177] 
	W0415 05:27:17.994828   30839 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-701000 host status: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-701000 host status: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	I0415 05:27:18.015750   30839 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-701000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "d713e4b43e2e6b1e23f3c061f7f248be3638cac07503f6f749e2ad5a1aa6eeb4",
	        "Created": "2024-04-15T12:19:37.168154568Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (114.025274ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:27:18.204261   30845 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-701000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-701000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (36.587043ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-701000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-701000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-701000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
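The second assertion, "unexpected end of JSON input", is simply what encoding/json reports for empty input: kubectl printed nothing because the context does not exist, so the label decode saw zero bytes. A minimal reproduction:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// kubectl produced no output, so the test effectively decoded "".
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}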
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "d713e4b43e2e6b1e23f3c061f7f248be3638cac07503f6f749e2ad5a1aa6eeb4",
	        "Created": "2024-04-15T12:19:37.168154568Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (113.219482ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:27:18.406462   30852 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.20s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-701000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-001000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-701000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-701000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-701000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "d713e4b43e2e6b1e23f3c061f7f248be3638cac07503f6f749e2ad5a1aa6eeb4",
	        "Created": "2024-04-15T12:19:37.168154568Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (114.536228ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:27:18.758749   30864 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.35s)
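One detail worth noting in the profile JSON above: durations are serialized as nanoseconds, so "StartHostTimeout":360000000000 is the same 360-second limit behind the earlier "create host timed out in 360.000000 seconds" failure. In Go terms:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Value copied from the profile JSON above; time.Duration counts nanoseconds.
		startHostTimeout := time.Duration(360000000000)
		fmt.Println(startHostTimeout)           // 6m0s
		fmt.Println(startHostTimeout.Seconds()) // 360
	}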

                                                
                                    
TestMultiNode/serial/CopyFile (0.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status --output json --alsologtostderr: exit status 7 (112.239383ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-701000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:18.821371   30868 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:18.821674   30868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:18.821679   30868 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:18.821683   30868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:18.821876   30868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:18.822064   30868 out.go:298] Setting JSON to true
	I0415 05:27:18.822088   30868 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:18.822118   30868 notify.go:220] Checking for updates...
	I0415 05:27:18.822384   30868 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:18.822399   30868 status.go:255] checking status of multinode-701000 ...
	I0415 05:27:18.822787   30868 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:27:18.871124   30868 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:27:18.871177   30868 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:27:18.871201   30868 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:27:18.871219   30868 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:27:18.871227   30868 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-701000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
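This decode failure is a plain type mismatch: with only one node, the status command emits a single JSON object (see the stdout above), while the multinode test unmarshals into a slice. A minimal reproduction, with the struct trimmed to fields visible in the log (the real type is minikube's cmd.Status):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is trimmed to two of the fields shown in the stdout above.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		single := []byte(`{"Name":"multinode-701000","Host":"Nonexistent"}`)

		var many []Status
		// Fails: a JSON object cannot decode into a Go slice.
		fmt.Println(json.Unmarshal(single, &many))

		var one Status
		// Succeeds: the same bytes decode into a single struct.
		fmt.Println(json.Unmarshal(single, &one), one)
	}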
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "d713e4b43e2e6b1e23f3c061f7f248be3638cac07503f6f749e2ad5a1aa6eeb4",
	        "Created": "2024-04-15T12:19:37.168154568Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (113.655786ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:27:19.037650   30874 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.28s)

                                                
                                    
TestMultiNode/serial/StopNode (0.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 node stop m03: exit status 85 (156.969938ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-701000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status: exit status 7 (112.851433ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:27:19.308306   30880 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:27:19.308319   30880 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr: exit status 7 (113.734605ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:19.370533   30884 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:19.371203   30884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:19.371244   30884 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:19.371249   30884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:19.371651   30884 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:19.372033   30884 out.go:298] Setting JSON to false
	I0415 05:27:19.372060   30884 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:19.372103   30884 notify.go:220] Checking for updates...
	I0415 05:27:19.372312   30884 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:19.372330   30884 status.go:255] checking status of multinode-701000 ...
	I0415 05:27:19.372716   30884 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:27:19.422007   30884 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:27:19.422075   30884 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:27:19.422095   30884 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:27:19.422115   30884 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:27:19.422129   30884 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr": multinode-701000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr": multinode-701000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr": multinode-701000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "d713e4b43e2e6b1e23f3c061f7f248be3638cac07503f6f749e2ad5a1aa6eeb4",
	        "Created": "2024-04-15T12:19:37.168154568Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (113.270361ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:27:19.587703   30890 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.55s)
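Editor's note: the exit-85 failure above is a lookup miss, not a Docker error. minikube names secondary nodes m02, m03, ..., and the profile config dumped later in this report carries only the single primary entry (whose Name field is empty), so `node stop m03` cannot resolve the node. A pared-down sketch of that lookup; Node and findNode are illustrative stand-ins for minikube's config types:

package main

import (
	"errors"
	"fmt"
)

// Node is an illustrative stand-in for minikube's config.Node. Per the
// profile dump later in this report, multinode-701000 retains only the
// primary control-plane entry, whose Name field is empty.
type Node struct {
	Name         string
	ControlPlane bool
}

var errNodeNotFound = errors.New("node not found")

// findNode mirrors the lookup that fails above with GUEST_NODE_RETRIEVE:
// a node name absent from the profile cannot be stopped or started.
func findNode(nodes []Node, name string) (Node, error) {
	for _, n := range nodes {
		if n.Name == name {
			return n, nil
		}
	}
	return Node{}, fmt.Errorf("retrieving node: Could not find node %s: %w", name, errNodeNotFound)
}

func main() {
	profile := []Node{{Name: "", ControlPlane: true}}
	if _, err := findNode(profile, "m03"); err != nil {
		fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err)
	}
}

The same lookup miss recurs in StartAfterStop below, which is why both subtests exit with status 85.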

                                                
                                    
TestMultiNode/serial/StartAfterStop (45.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 node start m03 -v=7 --alsologtostderr: exit status 85 (156.010985ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:19.649949   30894 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:19.651383   30894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:19.651392   30894 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:19.651396   30894 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:19.651578   30894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:19.651919   30894 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:19.652177   30894 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:19.673296   30894 out.go:177] 
	W0415 05:27:19.695262   30894 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0415 05:27:19.695286   30894 out.go:239] * 
	* 
	W0415 05:27:19.700723   30894 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0415 05:27:19.722014   30894 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0415 05:27:19.649949   30894 out.go:291] Setting OutFile to fd 1 ...
I0415 05:27:19.651383   30894 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 05:27:19.651392   30894 out.go:304] Setting ErrFile to fd 2...
I0415 05:27:19.651396   30894 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 05:27:19.651578   30894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
I0415 05:27:19.651919   30894 mustload.go:65] Loading cluster: multinode-701000
I0415 05:27:19.652177   30894 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 05:27:19.673296   30894 out.go:177] 
W0415 05:27:19.695262   30894 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0415 05:27:19.695286   30894 out.go:239] * 
* 
W0415 05:27:19.700723   30894 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0415 05:27:19.722014   30894 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-701000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr: exit status 7 (113.018921ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:19.806747   30896 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:19.807505   30896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:19.807513   30896 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:19.807519   30896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:19.807924   30896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:19.808313   30896 out.go:298] Setting JSON to false
	I0415 05:27:19.808343   30896 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:19.808379   30896 notify.go:220] Checking for updates...
	I0415 05:27:19.808598   30896 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:19.808613   30896 status.go:255] checking status of multinode-701000 ...
	I0415 05:27:19.809022   30896 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:27:19.857059   30896 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:27:19.857116   30896 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:27:19.857138   30896 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:27:19.857155   30896 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:27:19.857163   30896 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr: exit status 7 (118.890797ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:21.097080   30900 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:21.097421   30900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:21.097426   30900 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:21.097430   30900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:21.097650   30900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:21.097855   30900 out.go:298] Setting JSON to false
	I0415 05:27:21.097899   30900 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:21.097931   30900 notify.go:220] Checking for updates...
	I0415 05:27:21.098203   30900 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:21.098231   30900 status.go:255] checking status of multinode-701000 ...
	I0415 05:27:21.098784   30900 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:27:21.149529   30900 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:27:21.149583   30900 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:27:21.149602   30900 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:27:21.149620   30900 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:27:21.149626   30900 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr: exit status 7 (120.368192ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:22.108622   30904 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:22.108857   30904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:22.108862   30904 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:22.108866   30904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:22.109076   30904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:22.109272   30904 out.go:298] Setting JSON to false
	I0415 05:27:22.109303   30904 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:22.109337   30904 notify.go:220] Checking for updates...
	I0415 05:27:22.109584   30904 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:22.109599   30904 status.go:255] checking status of multinode-701000 ...
	I0415 05:27:22.109986   30904 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:27:22.160932   30904 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:27:22.160985   30904 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:27:22.161006   30904 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:27:22.161022   30904 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:27:22.161028   30904 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr: exit status 7 (118.51045ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:25.138530   30908 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:25.138765   30908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:25.138771   30908 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:25.138774   30908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:25.138952   30908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:25.139146   30908 out.go:298] Setting JSON to false
	I0415 05:27:25.139175   30908 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:25.139217   30908 notify.go:220] Checking for updates...
	I0415 05:27:25.139448   30908 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:25.139464   30908 status.go:255] checking status of multinode-701000 ...
	I0415 05:27:25.139850   30908 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:27:25.188271   30908 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:27:25.188334   30908 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:27:25.188352   30908 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:27:25.188368   30908 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:27:25.188376   30908 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr: exit status 7 (122.657438ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:29.418464   30915 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:29.418681   30915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:29.418687   30915 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:29.418690   30915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:29.418885   30915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:29.419068   30915 out.go:298] Setting JSON to false
	I0415 05:27:29.419092   30915 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:29.419125   30915 notify.go:220] Checking for updates...
	I0415 05:27:29.419398   30915 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:29.419412   30915 status.go:255] checking status of multinode-701000 ...
	I0415 05:27:29.419834   30915 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:27:29.471518   30915 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:27:29.471572   30915 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:27:29.471589   30915 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:27:29.471605   30915 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:27:29.471613   30915 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr: exit status 7 (117.961553ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:36.699060   30919 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:36.699260   30919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:36.699269   30919 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:36.699273   30919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:36.699467   30919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:36.699647   30919 out.go:298] Setting JSON to false
	I0415 05:27:36.699670   30919 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:36.699709   30919 notify.go:220] Checking for updates...
	I0415 05:27:36.699980   30919 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:36.699994   30919 status.go:255] checking status of multinode-701000 ...
	I0415 05:27:36.700384   30919 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:27:36.750855   30919 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:27:36.750906   30919 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:27:36.750934   30919 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:27:36.750952   30919 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:27:36.750958   30919 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr: exit status 7 (118.051186ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:40.891239   30926 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:40.891980   30926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:40.891988   30926 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:40.891994   30926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:40.892609   30926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:40.892839   30926 out.go:298] Setting JSON to false
	I0415 05:27:40.892883   30926 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:40.892918   30926 notify.go:220] Checking for updates...
	I0415 05:27:40.893182   30926 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:40.893199   30926 status.go:255] checking status of multinode-701000 ...
	I0415 05:27:40.893567   30926 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:27:40.943981   30926 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:27:40.944044   30926 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:27:40.944062   30926 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:27:40.944082   30926 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:27:40.944091   30926 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr: exit status 7 (116.099199ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:27:55.269205   30930 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:27:55.269484   30930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:55.269489   30930 out.go:304] Setting ErrFile to fd 2...
	I0415 05:27:55.269493   30930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:27:55.269664   30930 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:27:55.269842   30930 out.go:298] Setting JSON to false
	I0415 05:27:55.269864   30930 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:27:55.269898   30930 notify.go:220] Checking for updates...
	I0415 05:27:55.270127   30930 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:27:55.270143   30930 status.go:255] checking status of multinode-701000 ...
	I0415 05:27:55.270527   30930 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:27:55.319482   30930 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:27:55.319549   30930 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:27:55.319567   30930 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:27:55.319588   30930 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:27:55.319595   30930 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr: exit status 7 (116.82983ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:28:05.254801   30934 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:28:05.255090   30934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:28:05.255095   30934 out.go:304] Setting ErrFile to fd 2...
	I0415 05:28:05.255099   30934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:28:05.255277   30934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:28:05.255443   30934 out.go:298] Setting JSON to false
	I0415 05:28:05.255468   30934 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:28:05.255505   30934 notify.go:220] Checking for updates...
	I0415 05:28:05.255732   30934 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:28:05.255751   30934 status.go:255] checking status of multinode-701000 ...
	I0415 05:28:05.256138   30934 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:05.304486   30934 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:28:05.304558   30934 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:28:05.304576   30934 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:28:05.304596   30934 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:28:05.304604   30934 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-701000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "d713e4b43e2e6b1e23f3c061f7f248be3638cac07503f6f749e2ad5a1aa6eeb4",
	        "Created": "2024-04-15T12:19:37.168154568Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (113.037189ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:28:05.470129   30940 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.88s)
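Editor's note: the timestamps on the repeated status probes above (05:27:19, :21, :22, :25, :29, :36, :40, :55, 05:28:05) show the check at multinode_test.go:290 polling with a roughly doubling delay until it gives up. A simplified sketch of that pattern; pollStatus is a hypothetical helper, and the real test presumably uses the repo's own retry utilities:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollStatus re-runs a command with a doubling delay between attempts,
// approximating the cadence visible in the probes above. It returns nil
// on the first success, or the last error once attempts are exhausted.
func pollStatus(name string, args []string, attempts int) error {
	delay := time.Second
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
}

func main() {
	// With the container deleted, every probe exits with status 7, so
	// the loop runs to exhaustion just as the test above does.
	err := pollStatus("out/minikube-darwin-amd64",
		[]string{"-p", "multinode-701000", "status", "-v=7", "--alsologtostderr"}, 5)
	fmt.Println(err)
}

No amount of backoff helps here, since nothing recreates the deleted container between probes.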

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (787.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-701000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-701000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-701000: exit status 82 (10.511588732s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-701000"  ...
	* Stopping node "multinode-701000"  ...
	* Stopping node "multinode-701000"  ...
	* Stopping node "multinode-701000"  ...
	* Stopping node "multinode-701000"  ...
	* Stopping node "multinode-701000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-701000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-701000" : exit status 82
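Editor's note: each failure mode in this report surfaces as a distinct process exit code: 85 for GUEST_NODE_RETRIEVE, 82 for GUEST_STOP_TIMEOUT, 52 for the driver-level start failures, and 7 for `status` against a nonexistent host. A toy Go table of the pairings observed here; the two IDs marked hypothetical are illustrative names, since the log above never prints a reason string for those exits, and minikube's authoritative mapping lives in its reason package:

package main

import "fmt"

// Exit codes paired with the reasons observed in this report.
var exitCodes = map[string]int{
	"GUEST_NODE_RETRIEVE": 85, // `node stop m03` / `node start m03` above
	"GUEST_STOP_TIMEOUT":  82, // `stop -p multinode-701000` above
	"DRV_CREATE_FAILED":   52, // hypothetical ID for the failed `start` runs
	"STATUS_HOST_MISSING": 7,  // hypothetical ID for `status` on a missing host
}

func main() {
	for reason, code := range exitCodes {
		fmt.Printf("X Exiting due to %s -> exit status %d\n", reason, code)
	}
}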
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-701000 --wait=true -v=8 --alsologtostderr
E0415 05:29:24.620079   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:29:41.565631   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:29:54.136988   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 05:34:37.185474   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 05:34:41.563394   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:34:54.134626   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 05:39:41.561098   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:39:54.130428   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-701000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m56.35926807s)

                                                
                                                
-- stdout --
	* [multinode-701000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-701000" primary control-plane node in "multinode-701000" cluster
	* Pulling base image v0.0.43-1712854342-18621 ...
	* docker "multinode-701000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-701000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:28:16.111245   30960 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:28:16.111442   30960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:28:16.111447   30960 out.go:304] Setting ErrFile to fd 2...
	I0415 05:28:16.111451   30960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:28:16.112126   30960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:28:16.114321   30960 out.go:298] Setting JSON to false
	I0415 05:28:16.136902   30960 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":8866,"bootTime":1713175230,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 05:28:16.136997   30960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:28:16.158891   30960 out.go:177] * [multinode-701000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 05:28:16.222460   30960 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:28:16.200889   30960 notify.go:220] Checking for updates...
	I0415 05:28:16.264695   30960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 05:28:16.286387   30960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 05:28:16.307863   30960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:28:16.330218   30960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	I0415 05:28:16.351746   30960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:28:16.374516   30960 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:28:16.374717   30960 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:28:16.429849   30960 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 05:28:16.430006   30960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 05:28:16.537780   30960 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:123 SystemTime:2024-04-15 12:28:16.527232658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 05:28:16.581396   30960 out.go:177] * Using the docker driver based on existing profile
	I0415 05:28:16.602668   30960 start.go:297] selected driver: docker
	I0415 05:28:16.602696   30960 start.go:901] validating driver "docker" against &{Name:multinode-701000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:28:16.602797   30960 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:28:16.602997   30960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 05:28:16.713592   30960 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:123 SystemTime:2024-04-15 12:28:16.703249063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 05:28:16.716672   30960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:28:16.716748   30960 cni.go:84] Creating CNI manager for ""
	I0415 05:28:16.716757   30960 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 05:28:16.716826   30960 start.go:340] cluster config:
	{Name:multinode-701000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:28:16.760419   30960 out.go:177] * Starting "multinode-701000" primary control-plane node in "multinode-701000" cluster
	I0415 05:28:16.782609   30960 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 05:28:16.804589   30960 out.go:177] * Pulling base image v0.0.43-1712854342-18621 ...
	I0415 05:28:16.846613   30960 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:28:16.846656   30960 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 05:28:16.846712   30960 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 05:28:16.846730   30960 cache.go:56] Caching tarball of preloaded images
	I0415 05:28:16.846962   30960 preload.go:173] Found /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 05:28:16.846981   30960 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:28:16.847156   30960 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/multinode-701000/config.json ...
	I0415 05:28:16.898776   30960 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon, skipping pull
	I0415 05:28:16.898798   30960 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in daemon, skipping load
	I0415 05:28:16.898821   30960 cache.go:194] Successfully downloaded all kic artifacts
	I0415 05:28:16.898858   30960 start.go:360] acquireMachinesLock for multinode-701000: {Name:mk2f276f5ed2de5433c43cfc6c1200ad22d6fb74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:28:16.898961   30960 start.go:364] duration metric: took 77.21µs to acquireMachinesLock for "multinode-701000"
	I0415 05:28:16.898984   30960 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:28:16.898994   30960 fix.go:54] fixHost starting: 
	I0415 05:28:16.899234   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:16.946992   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:28:16.947043   30960 fix.go:112] recreateIfNeeded on multinode-701000: state= err=unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:16.947068   30960 fix.go:117] machineExists: false. err=machine does not exist
	I0415 05:28:16.969295   30960 out.go:177] * docker "multinode-701000" container is missing, will recreate.
	I0415 05:28:17.011895   30960 delete.go:124] DEMOLISHING multinode-701000 ...
	I0415 05:28:17.012067   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:17.062057   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	W0415 05:28:17.062112   30960 stop.go:83] unable to get state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:17.062131   30960 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:17.062500   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:17.111185   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:28:17.111234   30960 delete.go:82] Unable to get host status for multinode-701000, assuming it has already been deleted: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:17.111330   30960 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-701000
	W0415 05:28:17.160059   30960 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-701000 returned with exit code 1
	I0415 05:28:17.160089   30960 kic.go:371] could not find the container multinode-701000 to remove it. will try anyways
	I0415 05:28:17.160157   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:17.207421   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	W0415 05:28:17.207473   30960 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:17.207558   30960 cli_runner.go:164] Run: docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0"
	W0415 05:28:17.255560   30960 cli_runner.go:211] docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 05:28:17.255590   30960 oci.go:650] error shutdown multinode-701000: docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:18.257973   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:18.311798   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:28:18.311840   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:18.311851   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:28:18.311888   30960 retry.go:31] will retry after 670.23614ms: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:18.984496   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:19.037252   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:28:19.037302   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:19.037311   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:28:19.037334   30960 retry.go:31] will retry after 721.715377ms: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:19.760521   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:19.813036   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:28:19.813081   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:19.813092   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:28:19.813115   30960 retry.go:31] will retry after 903.645672ms: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:20.719084   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:20.771598   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:28:20.771644   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:20.771670   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:28:20.771695   30960 retry.go:31] will retry after 2.027503526s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:22.800567   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:22.852639   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:28:22.852694   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:22.852704   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:28:22.852732   30960 retry.go:31] will retry after 1.872992088s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:24.727207   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:24.778646   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:28:24.778693   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:24.778704   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:28:24.778725   30960 retry.go:31] will retry after 3.513388212s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:28.294493   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:28.346645   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:28:28.346691   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:28.346706   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:28:28.346737   30960 retry.go:31] will retry after 3.035486199s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:31.383481   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:28:31.436671   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:28:31.436713   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:28:31.436720   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:28:31.436750   30960 oci.go:88] couldn't shut down multinode-701000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	 
	I0415 05:28:31.436835   30960 cli_runner.go:164] Run: docker rm -f -v multinode-701000
	I0415 05:28:31.487117   30960 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-701000
	W0415 05:28:31.534353   30960 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-701000 returned with exit code 1
	I0415 05:28:31.534463   30960 cli_runner.go:164] Run: docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 05:28:31.582858   30960 cli_runner.go:164] Run: docker network rm multinode-701000
	I0415 05:28:31.687325   30960 fix.go:124] Sleeping 1 second for extra luck!
	I0415 05:28:32.688407   30960 start.go:125] createHost starting for "" (driver="docker")
	I0415 05:28:32.711925   30960 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 05:28:32.712100   30960 start.go:159] libmachine.API.Create for "multinode-701000" (driver="docker")
	I0415 05:28:32.712141   30960 client.go:168] LocalClient.Create starting
	I0415 05:28:32.712356   30960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 05:28:32.712455   30960 main.go:141] libmachine: Decoding PEM data...
	I0415 05:28:32.712490   30960 main.go:141] libmachine: Parsing certificate...
	I0415 05:28:32.712586   30960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 05:28:32.712667   30960 main.go:141] libmachine: Decoding PEM data...
	I0415 05:28:32.712682   30960 main.go:141] libmachine: Parsing certificate...
	I0415 05:28:32.733815   30960 cli_runner.go:164] Run: docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 05:28:32.783478   30960 cli_runner.go:211] docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 05:28:32.783565   30960 network_create.go:281] running [docker network inspect multinode-701000] to gather additional debugging logs...
	I0415 05:28:32.783590   30960 cli_runner.go:164] Run: docker network inspect multinode-701000
	W0415 05:28:32.831800   30960 cli_runner.go:211] docker network inspect multinode-701000 returned with exit code 1
	I0415 05:28:32.831830   30960 network_create.go:284] error running [docker network inspect multinode-701000]: docker network inspect multinode-701000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-701000 not found
	I0415 05:28:32.831841   30960 network_create.go:286] output of [docker network inspect multinode-701000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-701000 not found
	
	** /stderr **
	I0415 05:28:32.831969   30960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 05:28:32.882562   30960 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:28:32.883999   30960 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:28:32.884396   30960 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002560990}
	I0415 05:28:32.884415   30960 network_create.go:124] attempt to create docker network multinode-701000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 05:28:32.884489   30960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701000 multinode-701000
	I0415 05:28:32.969283   30960 network_create.go:108] docker network multinode-701000 192.168.67.0/24 created
	I0415 05:28:32.969322   30960 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-701000" container
	I0415 05:28:32.969439   30960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 05:28:33.019095   30960 cli_runner.go:164] Run: docker volume create multinode-701000 --label name.minikube.sigs.k8s.io=multinode-701000 --label created_by.minikube.sigs.k8s.io=true
	I0415 05:28:33.067866   30960 oci.go:103] Successfully created a docker volume multinode-701000
	I0415 05:28:33.067979   30960 cli_runner.go:164] Run: docker run --rm --name multinode-701000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-701000 --entrypoint /usr/bin/test -v multinode-701000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 05:28:33.320040   30960 oci.go:107] Successfully prepared a docker volume multinode-701000
	I0415 05:28:33.320079   30960 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:28:33.320090   30960 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 05:28:33.320194   30960 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-701000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 05:34:32.711067   30960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 05:34:32.711203   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:32.762572   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:32.762688   30960 retry.go:31] will retry after 342.999262ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:33.108164   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:33.158611   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:33.158724   30960 retry.go:31] will retry after 416.7686ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:33.577928   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:33.629486   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:33.629592   30960 retry.go:31] will retry after 384.881639ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:34.015703   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:34.068857   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:34.068955   30960 retry.go:31] will retry after 429.8512ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:34.501170   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:34.553700   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:34:34.553817   30960 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:34:34.553835   30960 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:34.553888   30960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 05:34:34.553965   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:34.602304   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:34.602406   30960 retry.go:31] will retry after 154.315755ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:34.757473   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:34.809971   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:34.810070   30960 retry.go:31] will retry after 500.2096ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:35.312198   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:35.364523   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:35.364619   30960 retry.go:31] will retry after 661.067958ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:36.028064   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:36.081147   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:34:36.081244   30960 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:34:36.081260   30960 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:36.081277   30960 start.go:128] duration metric: took 6m3.396316711s to createHost
	I0415 05:34:36.081348   30960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 05:34:36.081401   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:36.136903   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:36.136994   30960 retry.go:31] will retry after 335.179022ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:36.472695   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:36.524414   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:36.524510   30960 retry.go:31] will retry after 273.364582ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:36.800202   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:36.853038   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:36.853132   30960 retry.go:31] will retry after 823.928231ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:37.679344   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:37.734059   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:34:37.734155   30960 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:34:37.734167   30960 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:37.734232   30960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 05:34:37.734288   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:37.782102   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:37.782193   30960 retry.go:31] will retry after 336.320952ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:38.120930   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:38.171414   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:38.171507   30960 retry.go:31] will retry after 263.93693ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:38.437819   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:38.490143   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:34:38.490246   30960 retry.go:31] will retry after 322.759067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:38.814033   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:34:38.865719   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:34:38.865814   30960 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:34:38.865833   30960 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:38.865848   30960 fix.go:56] duration metric: took 6m21.970551188s for fixHost
	I0415 05:34:38.865855   30960 start.go:83] releasing machines lock for "multinode-701000", held for 6m21.970581232s
	W0415 05:34:38.865871   30960 start.go:713] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 05:34:38.865949   30960 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 05:34:38.865956   30960 start.go:728] Will try again in 5 seconds ...
	I0415 05:34:43.868105   30960 start.go:360] acquireMachinesLock for multinode-701000: {Name:mk2f276f5ed2de5433c43cfc6c1200ad22d6fb74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:34:43.868338   30960 start.go:364] duration metric: took 157.868µs to acquireMachinesLock for "multinode-701000"
	I0415 05:34:43.868380   30960 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:34:43.868387   30960 fix.go:54] fixHost starting: 
	I0415 05:34:43.868811   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:43.923167   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:34:43.923209   30960 fix.go:112] recreateIfNeeded on multinode-701000: state= err=unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:43.923228   30960 fix.go:117] machineExists: false. err=machine does not exist
	I0415 05:34:43.945064   30960 out.go:177] * docker "multinode-701000" container is missing, will recreate.
	I0415 05:34:43.986624   30960 delete.go:124] DEMOLISHING multinode-701000 ...
	I0415 05:34:43.986819   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:44.036752   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	W0415 05:34:44.036814   30960 stop.go:83] unable to get state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:44.036832   30960 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:44.037209   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:44.085500   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:34:44.085561   30960 delete.go:82] Unable to get host status for multinode-701000, assuming it has already been deleted: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:44.085640   30960 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-701000
	W0415 05:34:44.133944   30960 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-701000 returned with exit code 1
	I0415 05:34:44.133976   30960 kic.go:371] could not find the container multinode-701000 to remove it. will try anyways
	I0415 05:34:44.134050   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:44.184848   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	W0415 05:34:44.184894   30960 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:44.184972   30960 cli_runner.go:164] Run: docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0"
	W0415 05:34:44.232104   30960 cli_runner.go:211] docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 05:34:44.232133   30960 oci.go:650] error shutdown multinode-701000: docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:45.234498   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:45.285045   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:34:45.285093   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:45.285105   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:34:45.285127   30960 retry.go:31] will retry after 748.856624ms: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:46.036313   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:46.090396   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:34:46.090441   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:46.090452   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:34:46.090476   30960 retry.go:31] will retry after 678.465515ms: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:46.770654   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:46.824634   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:34:46.824676   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:46.824686   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:34:46.824717   30960 retry.go:31] will retry after 1.54417676s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:48.369845   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:48.423547   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:34:48.423595   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:48.423604   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:34:48.423628   30960 retry.go:31] will retry after 923.184984ms: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:49.347541   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:49.399763   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:34:49.399808   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:49.399818   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:34:49.399843   30960 retry.go:31] will retry after 2.236186516s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:51.637011   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:51.690836   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:34:51.690884   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:51.690894   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:34:51.690919   30960 retry.go:31] will retry after 2.963800648s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:54.657044   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:54.709688   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:34:54.709741   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:54.709749   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:34:54.709769   30960 retry.go:31] will retry after 4.375577961s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:59.087659   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:34:59.140520   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:34:59.140564   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:34:59.140572   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:34:59.140599   30960 retry.go:31] will retry after 5.620932932s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:35:04.763853   30960 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:35:04.815270   30960 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:35:04.815312   30960 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:35:04.815323   30960 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:35:04.815356   30960 oci.go:88] couldn't shut down multinode-701000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	 
	I0415 05:35:04.815426   30960 cli_runner.go:164] Run: docker rm -f -v multinode-701000
	I0415 05:35:04.865156   30960 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-701000
	W0415 05:35:04.913299   30960 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-701000 returned with exit code 1
	I0415 05:35:04.913401   30960 cli_runner.go:164] Run: docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 05:35:04.962386   30960 cli_runner.go:164] Run: docker network rm multinode-701000
	I0415 05:35:05.069203   30960 fix.go:124] Sleeping 1 second for extra luck!
	I0415 05:35:06.071389   30960 start.go:125] createHost starting for "" (driver="docker")
	I0415 05:35:06.093795   30960 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 05:35:06.093945   30960 start.go:159] libmachine.API.Create for "multinode-701000" (driver="docker")
	I0415 05:35:06.093973   30960 client.go:168] LocalClient.Create starting
	I0415 05:35:06.094199   30960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 05:35:06.094309   30960 main.go:141] libmachine: Decoding PEM data...
	I0415 05:35:06.094336   30960 main.go:141] libmachine: Parsing certificate...
	I0415 05:35:06.094416   30960 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 05:35:06.094495   30960 main.go:141] libmachine: Decoding PEM data...
	I0415 05:35:06.094509   30960 main.go:141] libmachine: Parsing certificate...
	I0415 05:35:06.116219   30960 cli_runner.go:164] Run: docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 05:35:06.188206   30960 cli_runner.go:211] docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 05:35:06.188290   30960 network_create.go:281] running [docker network inspect multinode-701000] to gather additional debugging logs...
	I0415 05:35:06.188305   30960 cli_runner.go:164] Run: docker network inspect multinode-701000
	W0415 05:35:06.237311   30960 cli_runner.go:211] docker network inspect multinode-701000 returned with exit code 1
	I0415 05:35:06.237338   30960 network_create.go:284] error running [docker network inspect multinode-701000]: docker network inspect multinode-701000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-701000 not found
	I0415 05:35:06.237354   30960 network_create.go:286] output of [docker network inspect multinode-701000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-701000 not found
	
	** /stderr **
	I0415 05:35:06.237509   30960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 05:35:06.287818   30960 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:35:06.289492   30960 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:35:06.290894   30960 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:35:06.291332   30960 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002295ac0}
	I0415 05:35:06.291347   30960 network_create.go:124] attempt to create docker network multinode-701000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0415 05:35:06.291440   30960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701000 multinode-701000
	W0415 05:35:06.340343   30960 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701000 multinode-701000 returned with exit code 1
	W0415 05:35:06.340377   30960 network_create.go:149] failed to create docker network multinode-701000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701000 multinode-701000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0415 05:35:06.340396   30960 network_create.go:116] failed to create docker network multinode-701000 192.168.76.0/24, will retry: subnet is taken
	I0415 05:35:06.342013   30960 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:35:06.342387   30960 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020e16a0}
	I0415 05:35:06.342399   30960 network_create.go:124] attempt to create docker network multinode-701000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0415 05:35:06.342470   30960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701000 multinode-701000
	I0415 05:35:06.427414   30960 network_create.go:108] docker network multinode-701000 192.168.85.0/24 created
	I0415 05:35:06.427444   30960 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-701000" container
	I0415 05:35:06.427548   30960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 05:35:06.477204   30960 cli_runner.go:164] Run: docker volume create multinode-701000 --label name.minikube.sigs.k8s.io=multinode-701000 --label created_by.minikube.sigs.k8s.io=true
	I0415 05:35:06.525315   30960 oci.go:103] Successfully created a docker volume multinode-701000
	I0415 05:35:06.525438   30960 cli_runner.go:164] Run: docker run --rm --name multinode-701000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-701000 --entrypoint /usr/bin/test -v multinode-701000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 05:35:06.763186   30960 oci.go:107] Successfully prepared a docker volume multinode-701000
	I0415 05:35:06.763223   30960 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:35:06.763246   30960 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 05:35:06.763336   30960 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-701000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 05:41:06.092659   30960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 05:41:06.092792   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:06.145797   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:06.145911   30960 retry.go:31] will retry after 204.455981ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:06.351503   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:06.403630   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:06.403729   30960 retry.go:31] will retry after 399.187581ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:06.805319   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:06.855254   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:06.855366   30960 retry.go:31] will retry after 332.65096ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:07.188631   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:07.239416   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:41:07.239537   30960 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:41:07.239553   30960 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:07.239611   30960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 05:41:07.239672   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:07.287727   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:07.287816   30960 retry.go:31] will retry after 374.255777ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:07.664405   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:07.715460   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:07.715551   30960 retry.go:31] will retry after 328.431654ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:08.045727   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:08.098196   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:08.098306   30960 retry.go:31] will retry after 384.11108ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:08.484850   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:08.543279   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:41:08.543427   30960 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:41:08.543455   30960 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:08.543475   30960 start.go:128] duration metric: took 6m2.475539643s to createHost
	I0415 05:41:08.543544   30960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 05:41:08.543617   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:08.595442   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:08.595536   30960 retry.go:31] will retry after 220.140993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:08.815902   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:08.869609   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:08.869713   30960 retry.go:31] will retry after 236.654555ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:09.107208   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:09.162358   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:09.162461   30960 retry.go:31] will retry after 508.181804ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:09.673068   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:09.727404   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:09.727513   30960 retry.go:31] will retry after 555.487471ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:10.285377   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:10.337267   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:41:10.337359   30960 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:41:10.337381   30960 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:10.337437   30960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 05:41:10.337494   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:10.385097   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:10.385187   30960 retry.go:31] will retry after 231.001236ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:10.618551   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:10.669389   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:10.669483   30960 retry.go:31] will retry after 238.700358ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:10.910471   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:10.961106   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:10.961199   30960 retry.go:31] will retry after 522.921233ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:11.485567   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:11.537379   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	I0415 05:41:11.537476   30960 retry.go:31] will retry after 662.309487ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:12.200229   30960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000
	W0415 05:41:12.250216   30960 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000 returned with exit code 1
	W0415 05:41:12.250318   30960 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	W0415 05:41:12.250341   30960 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-701000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:12.250353   30960 fix.go:56] duration metric: took 6m28.385723328s for fixHost
	I0415 05:41:12.250359   30960 start.go:83] releasing machines lock for "multinode-701000", held for 6m28.385765685s
	W0415 05:41:12.250436   30960 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-701000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-701000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0415 05:41:12.293923   30960 out.go:177] 
	W0415 05:41:12.314911   30960 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0415 05:41:12.314955   30960 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0415 05:41:12.314986   30960 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0415 05:41:12.336823   30960 out.go:177] 

                                                
                                                
** /stderr **
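
The stderr block above is dominated by one pattern: each shutdown-verification attempt shells out to docker container inspect --format={{.State.Status}}, the inspect fails because the container is already gone ("No such container"), and the loop retries with a growing, jittered delay until it gives up. A minimal Go sketch of that retry shape, with hypothetical names (verifyExited) and timings, not minikube's actual oci.go/retry.go code:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"strings"
		"time"
	)

	// verifyExited polls `docker container inspect` until the container
	// reports "exited" or the deadline passes. Illustrative sketch only.
	func verifyExited(name string, deadline time.Duration) error {
		start := time.Now()
		delay := 500 * time.Millisecond
		for time.Since(start) < deadline {
			out, err := exec.Command("docker", "container", "inspect", name,
				"--format", "{{.State.Status}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "exited" {
				return nil
			}
			// An inspect error here usually means "No such container"; the
			// log above treats that as an unknown state and retries anyway.
			time.Sleep(delay)
			// Grow the delay with jitter, matching the irregular intervals
			// (748ms, 678ms, 1.54s, ...) seen in the log.
			delay += time.Duration(rand.Int63n(int64(delay)))
		}
		return fmt.Errorf("couldn't verify container %q is exited within %v", name, deadline)
	}

	func main() {
		if err := verifyExited("multinode-701000", 20*time.Second); err != nil {
			fmt.Println(err)
		}
	}
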
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-701000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-701000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "8b568cd7d270b481aad65b4e2f1c55aa886abf0426f2e9a486ad82e3c7f35f9f",
	        "Created": "2024-04-15T12:35:06.388334129Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (113.277073ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:41:12.644260   31360 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (787.18s)
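
Earlier in the same failure, network_create.go skipped the reserved subnets 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24, hit "Pool overlaps with other one on this address space" on 192.168.76.0/24, and succeeded on 192.168.85.0/24. A rough Go sketch of that walk, assuming a step of 9 in the third octet as the log's sequence suggests; the real candidate list and reservation pre-checks live in minikube's network.go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// createNetwork walks candidate /24 subnets and retries on the
	// "Pool overlaps" error, the behaviour network_create.go logs above.
	// Hypothetical sketch: minikube also pre-checks reserved subnets
	// before ever calling `docker network create`.
	func createNetwork(name string) (string, error) {
		for octet := 49; octet <= 247; octet += 9 { // 49, 58, 67, 76, 85, ... as in the log
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			gateway := fmt.Sprintf("192.168.%d.1", octet)
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
			if err == nil {
				return subnet, nil
			}
			if strings.Contains(string(out), "Pool overlaps") {
				continue // subnet taken by another network; try the next candidate
			}
			return "", fmt.Errorf("docker network create failed: %v: %s", err, out)
		}
		return "", fmt.Errorf("no free 192.168.x.0/24 subnet found for %q", name)
	}

	func main() {
		subnet, err := createNetwork("demo-net")
		fmt.Println(subnet, err)
	}
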

                                                
                                    
TestMultiNode/serial/DeleteNode (0.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 node delete m03: exit status 80 (201.821194ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-701000 host status: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-701000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr: exit status 7 (113.20047ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:41:12.908887   31368 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:41:12.909189   31368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:41:12.909195   31368 out.go:304] Setting ErrFile to fd 2...
	I0415 05:41:12.909199   31368 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:41:12.909397   31368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:41:12.909571   31368 out.go:298] Setting JSON to false
	I0415 05:41:12.909602   31368 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:41:12.909631   31368 notify.go:220] Checking for updates...
	I0415 05:41:12.909866   31368 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:41:12.909880   31368 status.go:255] checking status of multinode-701000 ...
	I0415 05:41:12.910278   31368 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:12.959384   31368 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:12.959454   31368 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:41:12.959473   31368 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:41:12.959493   31368 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:41:12.959501   31368 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "8b568cd7d270b481aad65b4e2f1c55aa886abf0426f2e9a486ad82e3c7f35f9f",
	        "Created": "2024-04-15T12:35:06.388334129Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (112.925196ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:41:13.124507   31374 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.48s)
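
Every status check in this section degrades the same way: when docker container inspect --format={{.State.Status}} exits 1, status.go records the host state as "Nonexistent", cascades that to the kubelet, apiserver and kubeconfig fields, and exits 7. A hedged sketch of that mapping; the struct fields mirror what the report prints, everything else is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Status mirrors the fields `minikube status` prints in the report above.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	// clusterStatus degrades every field to "Nonexistent" when the container
	// cannot be inspected, as status.go does above. Illustrative only: the
	// real code goes on to probe kubelet and the apiserver as well.
	func clusterStatus(name string) Status {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			n := "Nonexistent"
			return Status{Name: name, Host: n, Kubelet: n, APIServer: n, Kubeconfig: n}
		}
		// Container exists; report its state and leave the deeper probes out.
		return Status{Name: name, Host: strings.TrimSpace(string(out))}
	}

	func main() {
		fmt.Printf("%+v\n", clusterStatus("multinode-701000"))
	}
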

                                                
                                    
TestMultiNode/serial/StopMultiNode (13.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 stop: exit status 82 (12.863811104s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-701000"  ...
	* Stopping node "multinode-701000"  ...
	* Stopping node "multinode-701000"  ...
	* Stopping node "multinode-701000"  ...
	* Stopping node "multinode-701000"  ...
	* Stopping node "multinode-701000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-701000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-701000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status: exit status 7 (113.976493ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:41:26.102523   31403 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:41:26.102534   31403 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr: exit status 7 (113.211973ms)

                                                
                                                
-- stdout --
	multinode-701000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:41:26.165704   31407 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:41:26.165987   31407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:41:26.165993   31407 out.go:304] Setting ErrFile to fd 2...
	I0415 05:41:26.165996   31407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:41:26.166176   31407 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:41:26.166362   31407 out.go:298] Setting JSON to false
	I0415 05:41:26.166385   31407 mustload.go:65] Loading cluster: multinode-701000
	I0415 05:41:26.166436   31407 notify.go:220] Checking for updates...
	I0415 05:41:26.166690   31407 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:41:26.166708   31407 status.go:255] checking status of multinode-701000 ...
	I0415 05:41:26.167095   31407 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:26.215702   31407 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:26.215756   31407 status.go:330] multinode-701000 host status = "" (err=state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	)
	I0415 05:41:26.215781   31407 status.go:257] multinode-701000 status: &{Name:multinode-701000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0415 05:41:26.215801   31407 status.go:260] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	E0415 05:41:26.215808   31407 status.go:263] The "multinode-701000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr": multinode-701000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-701000 status --alsologtostderr": multinode-701000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "8b568cd7d270b481aad65b4e2f1c55aa886abf0426f2e9a486ad82e3c7f35f9f",
	        "Created": "2024-04-15T12:35:06.388334129Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (112.895209ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:41:26.380436   31413 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (13.26s)
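
The stop failure prints "Stopping node" six times before exiting 82 with GUEST_STOP_TIMEOUT, because each attempt re-inspects a container that no longer exists. A bounded stop-and-verify loop in the same spirit; the attempt count and sleep are illustrative, not minikube's actual stop path:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// stopNode issues `docker stop` and then verifies the container reached
	// the "exited" state, retrying a fixed number of times. Illustrative
	// sketch loosely mirroring the six "Stopping node" lines above.
	func stopNode(name string, attempts int) error {
		for i := 0; i < attempts; i++ {
			fmt.Printf("* Stopping node %q ...\n", name)
			_ = exec.Command("docker", "stop", name).Run() // ignore: may already be gone
			out, err := exec.Command("docker", "container", "inspect", name,
				"--format", "{{.State.Status}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "exited" {
				return nil
			}
			// Inspect failed ("No such container") or the state is wrong;
			// the real code treats both as a failed verification and retries.
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("unable to stop %q after %d attempts", name, attempts)
	}

	func main() {
		fmt.Println(stopNode("multinode-701000", 6))
	}
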

                                                
                                    
TestMultiNode/serial/RestartMultiNode (100.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-701000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-701000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m40.444652165s)

                                                
                                                
-- stdout --
	* [multinode-701000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-701000" primary control-plane node in "multinode-701000" cluster
	* Pulling base image v0.0.43-1712854342-18621 ...
	* docker "multinode-701000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 05:41:26.444213   31417 out.go:291] Setting OutFile to fd 1 ...
	I0415 05:41:26.444515   31417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:41:26.444521   31417 out.go:304] Setting ErrFile to fd 2...
	I0415 05:41:26.444524   31417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 05:41:26.444724   31417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 05:41:26.446191   31417 out.go:298] Setting JSON to false
	I0415 05:41:26.468578   31417 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9656,"bootTime":1713175230,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 05:41:26.468667   31417 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 05:41:26.490228   31417 out.go:177] * [multinode-701000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 05:41:26.532262   31417 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 05:41:26.532301   31417 notify.go:220] Checking for updates...
	I0415 05:41:26.574991   31417 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 05:41:26.596131   31417 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 05:41:26.617942   31417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 05:41:26.639169   31417 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	I0415 05:41:26.661056   31417 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 05:41:26.682553   31417 config.go:182] Loaded profile config "multinode-701000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 05:41:26.683366   31417 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 05:41:26.738452   31417 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 05:41:26.738629   31417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 05:41:26.845088   31417 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:87 OomKillDisable:false NGoroutines:143 SystemTime:2024-04-15 12:41:26.834476775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 05:41:26.887625   31417 out.go:177] * Using the docker driver based on existing profile
	I0415 05:41:26.908603   31417 start.go:297] selected driver: docker
	I0415 05:41:26.908631   31417 start.go:901] validating driver "docker" against &{Name:multinode-701000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:41:26.908766   31417 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 05:41:26.908980   31417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 05:41:27.015189   31417 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:87 OomKillDisable:false NGoroutines:143 SystemTime:2024-04-15 12:41:27.005259274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 05:41:27.018237   31417 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 05:41:27.018312   31417 cni.go:84] Creating CNI manager for ""
	I0415 05:41:27.018321   31417 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 05:41:27.018389   31417 start.go:340] cluster config:
	{Name:multinode-701000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 05:41:27.061035   31417 out.go:177] * Starting "multinode-701000" primary control-plane node in "multinode-701000" cluster
	I0415 05:41:27.081867   31417 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 05:41:27.102911   31417 out.go:177] * Pulling base image v0.0.43-1712854342-18621 ...
	I0415 05:41:27.144881   31417 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:41:27.144962   31417 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 05:41:27.144940   31417 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 05:41:27.144980   31417 cache.go:56] Caching tarball of preloaded images
	I0415 05:41:27.145207   31417 preload.go:173] Found /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0415 05:41:27.145227   31417 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 05:41:27.145358   31417 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/multinode-701000/config.json ...
	I0415 05:41:27.197780   31417 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon, skipping pull
	I0415 05:41:27.197817   31417 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in daemon, skipping load
	I0415 05:41:27.197853   31417 cache.go:194] Successfully downloaded all kic artifacts
	I0415 05:41:27.197896   31417 start.go:360] acquireMachinesLock for multinode-701000: {Name:mk2f276f5ed2de5433c43cfc6c1200ad22d6fb74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 05:41:27.197991   31417 start.go:364] duration metric: took 77.364µs to acquireMachinesLock for "multinode-701000"
	I0415 05:41:27.198014   31417 start.go:96] Skipping create...Using existing machine configuration
	I0415 05:41:27.198026   31417 fix.go:54] fixHost starting: 
	I0415 05:41:27.198266   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:27.247334   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:27.247398   31417 fix.go:112] recreateIfNeeded on multinode-701000: state= err=unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:27.247416   31417 fix.go:117] machineExists: false. err=machine does not exist
	I0415 05:41:27.269166   31417 out.go:177] * docker "multinode-701000" container is missing, will recreate.
	I0415 05:41:27.310791   31417 delete.go:124] DEMOLISHING multinode-701000 ...
	I0415 05:41:27.310963   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:27.360419   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	W0415 05:41:27.360490   31417 stop.go:83] unable to get state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:27.360506   31417 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:27.360871   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:27.409265   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:27.409314   31417 delete.go:82] Unable to get host status for multinode-701000, assuming it has already been deleted: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:27.409390   31417 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-701000
	W0415 05:41:27.457677   31417 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-701000 returned with exit code 1
	I0415 05:41:27.457710   31417 kic.go:371] could not find the container multinode-701000 to remove it. will try anyways
	I0415 05:41:27.457793   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:27.505779   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	W0415 05:41:27.505832   31417 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:27.505915   31417 cli_runner.go:164] Run: docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0"
	W0415 05:41:27.554692   31417 cli_runner.go:211] docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0415 05:41:27.554721   31417 oci.go:650] error shutdown multinode-701000: docker exec --privileged -t multinode-701000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:28.555287   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:28.607544   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:28.607601   31417 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:28.607612   31417 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:41:28.607645   31417 retry.go:31] will retry after 736.578487ms: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:29.346558   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:29.399575   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:29.399622   31417 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:29.399632   31417 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:41:29.399654   31417 retry.go:31] will retry after 593.056ms: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:29.994537   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:30.047009   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:30.047076   31417 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:30.047085   31417 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:41:30.047106   31417 retry.go:31] will retry after 1.013515733s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:31.062379   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:31.115244   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:31.115296   31417 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:31.115308   31417 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:41:31.115332   31417 retry.go:31] will retry after 971.322587ms: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:32.087277   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:32.139899   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:32.139944   31417 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:32.139954   31417 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:41:32.139980   31417 retry.go:31] will retry after 3.08572268s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:35.227640   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:35.279333   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:35.279376   31417 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:35.279387   31417 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:41:35.279410   31417 retry.go:31] will retry after 3.340147068s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:38.620392   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:38.673072   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:38.673122   31417 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:38.673135   31417 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:41:38.673158   31417 retry.go:31] will retry after 3.88183042s: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:42.557331   31417 cli_runner.go:164] Run: docker container inspect multinode-701000 --format={{.State.Status}}
	W0415 05:41:42.611371   31417 cli_runner.go:211] docker container inspect multinode-701000 --format={{.State.Status}} returned with exit code 1
	I0415 05:41:42.611416   31417 oci.go:662] temporary error verifying shutdown: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	I0415 05:41:42.611427   31417 oci.go:664] temporary error: container multinode-701000 status is  but expect it to be exited
	I0415 05:41:42.611456   31417 oci.go:88] couldn't shut down multinode-701000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000
	 
	I0415 05:41:42.611532   31417 cli_runner.go:164] Run: docker rm -f -v multinode-701000
	I0415 05:41:42.661760   31417 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-701000
	W0415 05:41:42.709760   31417 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-701000 returned with exit code 1
	I0415 05:41:42.709885   31417 cli_runner.go:164] Run: docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 05:41:42.759114   31417 cli_runner.go:164] Run: docker network rm multinode-701000
	I0415 05:41:42.865872   31417 fix.go:124] Sleeping 1 second for extra luck!
	I0415 05:41:43.866309   31417 start.go:125] createHost starting for "" (driver="docker")
	I0415 05:41:43.889768   31417 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0415 05:41:43.889961   31417 start.go:159] libmachine.API.Create for "multinode-701000" (driver="docker")
	I0415 05:41:43.889997   31417 client.go:168] LocalClient.Create starting
	I0415 05:41:43.890237   31417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/ca.pem
	I0415 05:41:43.890350   31417 main.go:141] libmachine: Decoding PEM data...
	I0415 05:41:43.890385   31417 main.go:141] libmachine: Parsing certificate...
	I0415 05:41:43.890489   31417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18644-22866/.minikube/certs/cert.pem
	I0415 05:41:43.890572   31417 main.go:141] libmachine: Decoding PEM data...
	I0415 05:41:43.890587   31417 main.go:141] libmachine: Parsing certificate...
	I0415 05:41:43.911501   31417 cli_runner.go:164] Run: docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 05:41:43.962042   31417 cli_runner.go:211] docker network inspect multinode-701000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 05:41:43.962130   31417 network_create.go:281] running [docker network inspect multinode-701000] to gather additional debugging logs...
	I0415 05:41:43.962148   31417 cli_runner.go:164] Run: docker network inspect multinode-701000
	W0415 05:41:44.010697   31417 cli_runner.go:211] docker network inspect multinode-701000 returned with exit code 1
	I0415 05:41:44.010729   31417 network_create.go:284] error running [docker network inspect multinode-701000]: docker network inspect multinode-701000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-701000 not found
	I0415 05:41:44.010741   31417 network_create.go:286] output of [docker network inspect multinode-701000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-701000 not found
	
	** /stderr **
	I0415 05:41:44.010855   31417 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 05:41:44.060576   31417 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:41:44.061973   31417 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0415 05:41:44.062323   31417 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00224fbe0}
	I0415 05:41:44.062338   31417 network_create.go:124] attempt to create docker network multinode-701000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0415 05:41:44.062403   31417 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701000 multinode-701000
	I0415 05:41:44.146990   31417 network_create.go:108] docker network multinode-701000 192.168.67.0/24 created
	I0415 05:41:44.147032   31417 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-701000" container
	I0415 05:41:44.147148   31417 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 05:41:44.196248   31417 cli_runner.go:164] Run: docker volume create multinode-701000 --label name.minikube.sigs.k8s.io=multinode-701000 --label created_by.minikube.sigs.k8s.io=true
	I0415 05:41:44.244594   31417 oci.go:103] Successfully created a docker volume multinode-701000
	I0415 05:41:44.244712   31417 cli_runner.go:164] Run: docker run --rm --name multinode-701000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-701000 --entrypoint /usr/bin/test -v multinode-701000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -d /var/lib
	I0415 05:41:44.486885   31417 oci.go:107] Successfully prepared a docker volume multinode-701000
	I0415 05:41:44.486922   31417 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 05:41:44.486934   31417 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 05:41:44.487036   31417 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-701000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-701000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701000
helpers_test.go:235: (dbg) docker inspect multinode-701000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-701000",
	        "Id": "7400086db9b359541207aa88848d73caacdf320fd86417903da279d40ae003ba",
	        "Created": "2024-04-15T12:41:44.107364153Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-701000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-701000 -n multinode-701000: exit status 7 (116.243385ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:43:07.068066   31551 status.go:249] status error: host: state: unknown state "multinode-701000": docker container inspect multinode-701000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-701000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-701000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (100.62s)
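
Note on the failure mode: the retry.go/oci.go lines in the stderr above show minikube's shutdown-verification loop. It re-runs docker container inspect multinode-701000 --format={{.State.Status}} with jittered, roughly growing delays (from ~0.6s up to ~3.9s) and finally gives up at oci.go:88 with "might be okay". A minimal Go sketch of that probe-with-backoff pattern follows; verifyExited and the timing constants are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// verifyExited mirrors the probe in the log: it succeeds only once
// "docker container inspect <name> --format={{.State.Status}}" reports "exited".
func verifyExited(name string) error {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return fmt.Errorf("unknown state %q: %w", name, err)
	}
	if s := strings.TrimSpace(string(out)); s != "exited" {
		return fmt.Errorf("container %s status is %q but expect it to be exited", name, s)
	}
	return nil
}

func main() {
	backoff := 500 * time.Millisecond
	deadline := time.Now().Add(15 * time.Second)
	for time.Now().Before(deadline) {
		err := verifyExited("multinode-701000")
		if err == nil {
			fmt.Println("container is exited")
			return
		}
		// Jittered, roughly doubling delays reproduce the 0.74s, 0.59s,
		// 1.01s, ... 3.88s spacing visible in the retry.go lines above.
		d := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		backoff *= 2
	}
	// Mirrors oci.go:88 above: give up, but let deletion proceed anyway.
	fmt.Println("couldn't shut down container (might be okay)")
}

In this run the probe can never succeed: the container was already gone, so every inspect returns "No such container" with exit status 1, the loop runs out its budget, and the recreate path continues regardless.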

                                                
                                    
TestScheduledStopUnix (300.92s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-295000 --memory=2048 --driver=docker 
E0415 05:46:04.678919   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:49:41.622201   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:49:54.192746   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-295000 --memory=2048 --driver=docker : signal: killed (5m0.004744677s)

                                                
                                                
-- stdout --
	* [scheduled-stop-295000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-295000" primary control-plane node in "scheduled-stop-295000" cluster
	* Pulling base image v0.0.43-1712854342-18621 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-295000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-295000" primary control-plane node in "scheduled-stop-295000" cluster
	* Pulling base image v0.0.43-1712854342-18621 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-15 05:50:19.37296 -0700 PDT m=+4886.493703886
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-295000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-295000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-295000",
	        "Id": "b56323b4ad8ea472529e9f2adc7aa12e043f0f348f8481723e51fae241df55e5",
	        "Created": "2024-04-15T12:45:20.483759476Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-295000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-295000 -n scheduled-stop-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-295000 -n scheduled-stop-295000: exit status 7 (112.80237ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:50:19.538098   32230 status.go:249] status error: host: state: unknown state "scheduled-stop-295000": docker container inspect scheduled-stop-295000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-295000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-295000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-295000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-295000
--- FAIL: TestScheduledStopUnix (300.92s)
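
Note on the failure mode: as in TestMultiNode/serial/RestartMultiNode above, the start command does not fail on its own; the harness's deadline expires while the run is still at "Creating docker container", the child process is killed, and os/exec reports it as signal: killed (5m0.004744677s). A minimal sketch of how that exact error string arises, with a placeholder sleep 600 standing in for the real out/minikube-darwin-amd64 start invocation:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// 5-minute budget, matching the (5m0.004744677s) figure in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	start := time.Now()
	// Placeholder long-running command; the test runs
	// "out/minikube-darwin-amd64 start -p scheduled-stop-295000 ..." here.
	cmd := exec.CommandContext(ctx, "sleep", "600")
	err := cmd.Run()

	// Once the deadline fires, the child is killed and err stringifies as
	// "signal: killed", which is what the harness reports above.
	fmt.Printf("Non-zero exit: %v (%s)\n", err, time.Since(start))
}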

                                                
                                    
TestSkaffold (300.9s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe3224689667 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe3224689667 version: (1.417751625s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-058000 --memory=2600 --driver=docker 
E0415 05:51:17.244652   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 05:54:41.617938   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:54:54.188484   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-058000 --memory=2600 --driver=docker : signal: killed (4m57.330936749s)

                                                
                                                
-- stdout --
	* [skaffold-058000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-058000" primary control-plane node in "skaffold-058000" cluster
	* Pulling base image v0.0.43-1712854342-18621 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-058000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-058000" primary control-plane node in "skaffold-058000" cluster
	* Pulling base image v0.0.43-1712854342-18621 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-04-15 05:55:20.297489 -0700 PDT m=+5187.422502960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-058000
helpers_test.go:235: (dbg) docker inspect skaffold-058000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-058000",
	        "Id": "adc601fd125a3e0d4f9f101088c7f44618a73053904f6e3090388935a39e6b5f",
	        "Created": "2024-04-15T12:50:24.040141999Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-058000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-058000 -n skaffold-058000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-058000 -n skaffold-058000: exit status 7 (115.334489ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0415 05:55:20.468357   32491 status.go:249] status error: host: state: unknown state "skaffold-058000": docker container inspect skaffold-058000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-058000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-058000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-058000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-058000
--- FAIL: TestSkaffold (300.90s)
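
Note on the post-mortem pattern repeated in each failure above: status --format={{.Host}} exits with status 7 yet still prints a usable state string ("Nonexistent"), and helpers_test.go downgrades that to "status error: exit status 7 (may be ok)". A sketch of reading both the state and the exit code via os/exec follows; tolerating exactly code 7 is an assumption lifted from this log, not a documented contract.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the same status probe as the post-mortems above and treats
// exit status 7 as data rather than a hard failure.
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	state := strings.TrimSpace(string(out)) // e.g. "Nonexistent"
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		return state, nil // "may be ok": stdout still carries the host state
	}
	return state, err
}

func main() {
	state, err := hostState("skaffold-058000")
	fmt.Printf("state=%q err=%v\n", state, err)
}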

                                                
                                    
TestInsufficientStorage (300.74s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-457000 --memory=2048 --output=json --wait=true --driver=docker 
E0415 05:59:41.579440   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 05:59:54.149646   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-457000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.005434654s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0ecf0423-6df9-4c69-8d92-0b78d25152cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-457000] minikube v1.33.0-beta.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc87dc63-3741-493c-8649-91bf2dfa9929","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18644"}}
	{"specversion":"1.0","id":"a13c1c4d-00fd-4549-9b87-b85351ceca1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig"}}
	{"specversion":"1.0","id":"89968966-0335-4378-883e-180c828929d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a44a3bff-eca7-4e58-88cb-b9ed659f7ecd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"117c62d2-49d1-4f5e-8e03-9b6d1816ac1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube"}}
	{"specversion":"1.0","id":"5ea90b35-e5fd-4617-8d74-f1fab75fee71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b84d3c46-9f1e-492c-b8cf-84fb4ab0a681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"30ddfd56-302b-4c48-a9fc-83e92e970eba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c8db1e7e-599c-4e71-9cd1-4601d8799487","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7aa00686-f90e-4ee9-a1c8-5090c6c4a062","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"ab528c68-7471-4edc-938c-64756557ad8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-457000\" primary control-plane node in \"insufficient-storage-457000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"49ec2252-fc08-48a0-934f-252eba1a49b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1712854342-18621 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"feb12c72-bf94-4ce2-a441-d9e80e534579","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-457000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-457000 --output=json --layout=cluster: context deadline exceeded (915ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-457000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-457000
--- FAIL: TestInsufficientStorage (300.74s)
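The failure mode here is worth spelling out: the status command was run with a context that had already expired (deadline exceeded after 915ns), so it produced no stdout at all, and unmarshalling an empty byte slice is exactly what yields "unexpected end of JSON input". A minimal sketch reproducing that error (the struct is a hypothetical stand-in, not minikube's actual status type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus is a hypothetical stand-in for whatever the test
// unmarshals the status output into.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
}

func main() {
	var out []byte // empty: the command wrote nothing before the deadline hit

	var st clusterStatus
	err := json.Unmarshal(out, &st)
	fmt.Println(err) // prints: unexpected end of JSON input
}
```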

                                                
                                    

Test pass (172/213)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.64
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.63
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.29.3/json-events 16.65
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.31
18 TestDownloadOnly/v1.29.3/DeleteAll 0.64
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.37
21 TestDownloadOnly/v1.30.0-rc.2/json-events 16.19
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.3
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.63
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.37
29 TestDownloadOnlyKic 1.88
30 TestBinaryMirror 1.66
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
36 TestAddons/Setup 290.43
40 TestAddons/parallel/InspektorGadget 10.84
41 TestAddons/parallel/MetricsServer 5.86
42 TestAddons/parallel/HelmTiller 10.9
44 TestAddons/parallel/CSI 41.9
45 TestAddons/parallel/Headlamp 12.24
46 TestAddons/parallel/CloudSpanner 6.69
47 TestAddons/parallel/LocalPath 57.03
48 TestAddons/parallel/NvidiaDevicePlugin 5.65
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.1
53 TestAddons/StoppedEnableDisable 11.68
61 TestHyperKitDriverInstallOrUpdate 6.6
64 TestErrorSpam/setup 19.37
65 TestErrorSpam/start 2.37
66 TestErrorSpam/status 1.23
67 TestErrorSpam/pause 1.65
68 TestErrorSpam/unpause 1.83
69 TestErrorSpam/stop 11.41
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 37.66
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 34.87
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
81 TestFunctional/serial/CacheCmd/cache/add_local 2.12
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
83 TestFunctional/serial/CacheCmd/cache/list 0.09
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.44
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.99
86 TestFunctional/serial/CacheCmd/cache/delete 0.18
87 TestFunctional/serial/MinikubeKubectlCmd 1
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.32
89 TestFunctional/serial/ExtraConfig 40.53
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 3.06
92 TestFunctional/serial/LogsFileCmd 3.18
93 TestFunctional/serial/InvalidService 4.46
95 TestFunctional/parallel/ConfigCmd 0.65
96 TestFunctional/parallel/DashboardCmd 15.37
97 TestFunctional/parallel/DryRun 1.73
98 TestFunctional/parallel/InternationalLanguage 0.84
99 TestFunctional/parallel/StatusCmd 1.21
104 TestFunctional/parallel/AddonsCmd 0.27
105 TestFunctional/parallel/PersistentVolumeClaim 27.65
107 TestFunctional/parallel/SSHCmd 0.85
108 TestFunctional/parallel/CpCmd 2.95
109 TestFunctional/parallel/MySQL 29.42
110 TestFunctional/parallel/FileSync 0.43
111 TestFunctional/parallel/CertSync 2.61
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
119 TestFunctional/parallel/License 0.54
120 TestFunctional/parallel/Version/short 0.11
121 TestFunctional/parallel/Version/components 1.06
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
126 TestFunctional/parallel/ImageCommands/ImageBuild 2.96
127 TestFunctional/parallel/ImageCommands/Setup 2.23
128 TestFunctional/parallel/DockerEnv/bash 2.02
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.08
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.46
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.13
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.65
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.89
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.3
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.59
139 TestFunctional/parallel/ServiceCmd/DeployApp 14.17
140 TestFunctional/parallel/ServiceCmd/List 0.45
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
142 TestFunctional/parallel/ServiceCmd/HTTPS 15
144 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.15
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
154 TestFunctional/parallel/ServiceCmd/Format 15
155 TestFunctional/parallel/ServiceCmd/URL 15
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
157 TestFunctional/parallel/ProfileCmd/profile_list 0.54
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
159 TestFunctional/parallel/MountCmd/any-port 8.19
160 TestFunctional/parallel/MountCmd/specific-port 2.52
161 TestFunctional/parallel/MountCmd/VerifyCleanup 2.98
162 TestFunctional/delete_addon-resizer_images 0.13
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.05
168 TestMultiControlPlane/serial/StartCluster 104.41
169 TestMultiControlPlane/serial/DeployApp 5.63
170 TestMultiControlPlane/serial/PingHostFromPods 1.43
171 TestMultiControlPlane/serial/AddWorkerNode 19.63
172 TestMultiControlPlane/serial/NodeLabels 0.06
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.14
174 TestMultiControlPlane/serial/CopyFile 25.46
175 TestMultiControlPlane/serial/StopSecondaryNode 11.94
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.92
177 TestMultiControlPlane/serial/RestartSecondaryNode 67.88
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.13
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 210.64
180 TestMultiControlPlane/serial/DeleteSecondaryNode 12.15
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
182 TestMultiControlPlane/serial/StopCluster 33.01
183 TestMultiControlPlane/serial/RestartCluster 99.18
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
185 TestMultiControlPlane/serial/AddSecondaryNode 39.85
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.28
189 TestImageBuild/serial/Setup 21.34
190 TestImageBuild/serial/NormalBuild 1.91
191 TestImageBuild/serial/BuildWithBuildArg 1.1
192 TestImageBuild/serial/BuildWithDockerIgnore 0.84
193 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.85
197 TestJSONOutput/start/Command 74.85
198 TestJSONOutput/start/Audit 0
200 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/pause/Command 0.56
204 TestJSONOutput/pause/Audit 0
206 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/unpause/Command 0.59
210 TestJSONOutput/unpause/Audit 0
212 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
215 TestJSONOutput/stop/Command 5.73
216 TestJSONOutput/stop/Audit 0
218 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
219 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
220 TestErrorJSONOutput 0.77
222 TestKicCustomNetwork/create_custom_network 23.14
223 TestKicCustomNetwork/use_default_bridge_network 23.19
224 TestKicExistingNetwork 23.13
225 TestKicCustomSubnet 23.44
226 TestKicStaticIP 22.46
227 TestMainNoArgs 0.09
228 TestMinikubeProfile 48.48
231 TestMountStart/serial/StartWithMountFirst 7.38
232 TestMountStart/serial/VerifyMountFirst 0.39
233 TestMountStart/serial/StartWithMountSecond 7.41
234 TestMountStart/serial/VerifyMountSecond 0.39
235 TestMountStart/serial/DeleteFirst 2.12
255 TestPreload 131.42
276 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.7
277 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.02
TestDownloadOnly/v1.20.0/json-events (16.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-693000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-693000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (16.63819196s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.64s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-693000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-693000: exit status 85 (300.253641ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-693000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:28 PDT |          |
	|         | -p download-only-693000        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 04:28:52
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 04:28:52.736940   23320 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:28:52.737215   23320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:28:52.737220   23320 out.go:304] Setting ErrFile to fd 2...
	I0415 04:28:52.737224   23320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:28:52.737394   23320 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	W0415 04:28:52.737492   23320 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18644-22866/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18644-22866/.minikube/config/config.json: no such file or directory
	I0415 04:28:52.739238   23320 out.go:298] Setting JSON to true
	I0415 04:28:52.761906   23320 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5303,"bootTime":1713175229,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 04:28:52.762003   23320 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:28:52.783521   23320 out.go:97] [download-only-693000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 04:28:52.783701   23320 notify.go:220] Checking for updates...
	I0415 04:28:52.806569   23320 out.go:169] MINIKUBE_LOCATION=18644
	W0415 04:28:52.783754   23320 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball: no such file or directory
	I0415 04:28:52.850420   23320 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 04:28:52.871609   23320 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 04:28:52.893378   23320 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:28:52.914489   23320 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	W0415 04:28:52.957281   23320 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 04:28:52.957808   23320 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:28:53.014152   23320 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 04:28:53.014327   23320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 04:28:53.121922   23320 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:98 SystemTime:2024-04-15 11:28:53.112171921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 04:28:53.143875   23320 out.go:97] Using the docker driver based on user configuration
	I0415 04:28:53.143926   23320 start.go:297] selected driver: docker
	I0415 04:28:53.143939   23320 start.go:901] validating driver "docker" against <nil>
	I0415 04:28:53.144153   23320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 04:28:53.255552   23320 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:98 SystemTime:2024-04-15 11:28:53.245831432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 04:28:53.255751   23320 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 04:28:53.258971   23320 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0415 04:28:53.259116   23320 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 04:28:53.280562   23320 out.go:169] Using Docker Desktop driver with root privileges
	I0415 04:28:53.302409   23320 cni.go:84] Creating CNI manager for ""
	I0415 04:28:53.302461   23320 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 04:28:53.302649   23320 start.go:340] cluster config:
	{Name:download-only-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:28:53.324388   23320 out.go:97] Starting "download-only-693000" primary control-plane node in "download-only-693000" cluster
	I0415 04:28:53.324451   23320 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 04:28:53.346275   23320 out.go:97] Pulling base image v0.0.43-1712854342-18621 ...
	I0415 04:28:53.346374   23320 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 04:28:53.346454   23320 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 04:28:53.396864   23320 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f to local cache
	I0415 04:28:53.397116   23320 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory
	I0415 04:28:53.397255   23320 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f to local cache
	I0415 04:28:53.409263   23320 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0415 04:28:53.409293   23320 cache.go:56] Caching tarball of preloaded images
	I0415 04:28:53.409572   23320 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 04:28:53.431219   23320 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0415 04:28:53.431247   23320 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 04:28:53.516283   23320 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0415 04:28:58.244497   23320 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 04:28:58.244715   23320 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0415 04:28:58.796370   23320 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0415 04:28:58.796666   23320 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/download-only-693000/config.json ...
	I0415 04:28:58.796689   23320 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/download-only-693000/config.json: {Name:mk01f4082d4583dcb1a06193bb5a15ca8aca9c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 04:28:58.797745   23320 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 04:28:58.798175   23320 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	I0415 04:29:03.107535   23320 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f as a tarball
	
	
	* The control-plane node download-only-693000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-693000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
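Note that this subtest passes even though `minikube logs` exits non-zero: exit status 85 is the expected outcome for a download-only profile whose control-plane host was never created (see the "host does not exist" message at the end of the stdout above). A minimal sketch of asserting a specific exit code from a CLI run in Go; the binary path and flags are taken from the log above, and this is an illustration, not the actual aaa_download_only_test.go implementation:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-693000")
	out, err := cmd.CombinedOutput()

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 85 is the exit code observed in the report when the profile's
		// control-plane host does not exist.
		fmt.Printf("exit code %d, output %d bytes\n", ee.ExitCode(), len(out))
		return
	}
	// Any other error (e.g. binary not found) is not the expected failure shape.
	fmt.Println("unexpected result:", err)
}
```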

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.63s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-693000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (16.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-983000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-983000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker : (16.654017344s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (16.65s)

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-983000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-983000: exit status 85 (308.888214ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-693000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:28 PDT |                     |
	|         | -p download-only-693000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:29 PDT | 15 Apr 24 04:29 PDT |
	| delete  | -p download-only-693000        | download-only-693000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:29 PDT | 15 Apr 24 04:29 PDT |
	| start   | -o=json --download-only        | download-only-983000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:29 PDT |                     |
	|         | -p download-only-983000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 04:29:10
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 04:29:10.681675   23388 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:29:10.681851   23388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:29:10.681857   23388 out.go:304] Setting ErrFile to fd 2...
	I0415 04:29:10.681861   23388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:29:10.682640   23388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 04:29:10.684493   23388 out.go:298] Setting JSON to true
	I0415 04:29:10.706479   23388 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5321,"bootTime":1713175229,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 04:29:10.706572   23388 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:29:10.727956   23388 out.go:97] [download-only-983000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 04:29:10.749511   23388 out.go:169] MINIKUBE_LOCATION=18644
	I0415 04:29:10.728177   23388 notify.go:220] Checking for updates...
	I0415 04:29:10.792574   23388 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 04:29:10.813459   23388 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 04:29:10.834705   23388 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:29:10.855648   23388 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	W0415 04:29:10.897548   23388 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 04:29:10.898095   23388 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:29:10.953320   23388 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 04:29:10.953478   23388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 04:29:11.065609   23388 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:98 SystemTime:2024-04-15 11:29:11.054383827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 04:29:11.087526   23388 out.go:97] Using the docker driver based on user configuration
	I0415 04:29:11.087593   23388 start.go:297] selected driver: docker
	I0415 04:29:11.087607   23388 start.go:901] validating driver "docker" against <nil>
	I0415 04:29:11.087804   23388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 04:29:11.198417   23388 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:98 SystemTime:2024-04-15 11:29:11.188592848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0
-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev S
chemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/do
cker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 04:29:11.198607   23388 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 04:29:11.201526   23388 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0415 04:29:11.201671   23388 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 04:29:11.223131   23388 out.go:169] Using Docker Desktop driver with root privileges
	I0415 04:29:11.244213   23388 cni.go:84] Creating CNI manager for ""
	I0415 04:29:11.244257   23388 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 04:29:11.244276   23388 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 04:29:11.244413   23388 start.go:340] cluster config:
	{Name:download-only-983000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-983000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:29:11.266188   23388 out.go:97] Starting "download-only-983000" primary control-plane node in "download-only-983000" cluster
	I0415 04:29:11.266262   23388 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 04:29:11.286926   23388 out.go:97] Pulling base image v0.0.43-1712854342-18621 ...
	I0415 04:29:11.287100   23388 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 04:29:11.287055   23388 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 04:29:11.336612   23388 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f to local cache
	I0415 04:29:11.336791   23388 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory
	I0415 04:29:11.336809   23388 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory, skipping pull
	I0415 04:29:11.336815   23388 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in cache, skipping pull
	I0415 04:29:11.336822   23388 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f as a tarball
	I0415 04:29:11.343527   23388 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 04:29:11.343563   23388 cache.go:56] Caching tarball of preloaded images
	I0415 04:29:11.343854   23388 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 04:29:11.365755   23388 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0415 04:29:11.365800   23388 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0415 04:29:11.445127   23388 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4?checksum=md5:2fedab548578a1509c0f422889c3109c -> /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0415 04:29:13.941856   23388 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0415 04:29:13.942113   23388 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0415 04:29:14.439114   23388 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 04:29:14.439377   23388 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/download-only-983000/config.json ...
	I0415 04:29:14.439399   23388 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/download-only-983000/config.json: {Name:mk6ee6d8ec22664a136f553e6c0b348df2c385eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 04:29:14.439729   23388 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 04:29:14.440803   23388 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/darwin/amd64/v1.29.3/kubectl
	
	
	* The control-plane node download-only-983000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-983000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.31s)
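The preload download in the log above carries a "?checksum=md5:<hex>" suffix on its URL, which reads as the expected digest of the tarball (the log then reports "saving checksum" and "verifying checksum" steps). A minimal sketch, under that assumption, of verifying a downloaded preload against its md5 digest; minikube's own download code may do this differently, and the path and digest here are copied verbatim from the log lines above:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the file through an md5 hash and compares the
// hex digest against the expected value.
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	err := verifyMD5(
		"/Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4",
		"2fedab548578a1509c0f422889c3109c",
	)
	fmt.Println(err) // nil when the cached tarball matches the advertised digest
}
```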

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (0.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.64s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-983000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnly/v1.30.0-rc.2/json-events (16.19s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-076000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-076000 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=docker : (16.184979295s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (16.19s)
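Note: with -o=json, minikube emits one JSON event object per output line, which is what the json-events subtest consumes. A minimal reader, assuming only line-delimited JSON on stdin (the event schema itself is not shown in this log, so a generic map is used):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Reads line-delimited JSON events from stdin and prints each one.
func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Fprintln(os.Stderr, "skipping non-JSON line:", err)
			continue
		}
		fmt.Println(ev)
	}
}

It could be fed directly from the command under test, e.g. out/minikube-darwin-amd64 start -o=json --download-only ... | go run reader.go.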

TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-076000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-076000: exit status 85 (302.128212ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-693000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:28 PDT |                     |
	|         | -p download-only-693000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:29 PDT | 15 Apr 24 04:29 PDT |
	| delete  | -p download-only-693000           | download-only-693000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:29 PDT | 15 Apr 24 04:29 PDT |
	| start   | -o=json --download-only           | download-only-983000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:29 PDT |                     |
	|         | -p download-only-983000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:29 PDT | 15 Apr 24 04:29 PDT |
	| delete  | -p download-only-983000           | download-only-983000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:29 PDT | 15 Apr 24 04:29 PDT |
	| start   | -o=json --download-only           | download-only-076000 | jenkins | v1.33.0-beta.0 | 15 Apr 24 04:29 PDT |                     |
	|         | -p download-only-076000           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 04:29:28
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 04:29:28.654545   23455 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:29:28.654893   23455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:29:28.654899   23455 out.go:304] Setting ErrFile to fd 2...
	I0415 04:29:28.654902   23455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:29:28.655077   23455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 04:29:28.656598   23455 out.go:298] Setting JSON to true
	I0415 04:29:28.678490   23455 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5339,"bootTime":1713175229,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 04:29:28.678575   23455 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:29:28.700387   23455 out.go:97] [download-only-076000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 04:29:28.723055   23455 out.go:169] MINIKUBE_LOCATION=18644
	I0415 04:29:28.700619   23455 notify.go:220] Checking for updates...
	I0415 04:29:28.767231   23455 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 04:29:28.795188   23455 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 04:29:28.815857   23455 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:29:28.836988   23455 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	W0415 04:29:28.879731   23455 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 04:29:28.880251   23455 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:29:28.935192   23455 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 04:29:28.935349   23455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 04:29:29.044409   23455 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:98 SystemTime:2024-04-15 11:29:29.03465565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 04:29:29.066266   23455 out.go:97] Using the docker driver based on user configuration
	I0415 04:29:29.066308   23455 start.go:297] selected driver: docker
	I0415 04:29:29.066325   23455 start.go:901] validating driver "docker" against <nil>
	I0415 04:29:29.066562   23455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 04:29:29.177552   23455 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:98 SystemTime:2024-04-15 11:29:29.16795435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 04:29:29.177752   23455 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 04:29:29.180650   23455 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0415 04:29:29.180797   23455 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 04:29:29.202204   23455 out.go:169] Using Docker Desktop driver with root privileges
	I0415 04:29:29.223286   23455 cni.go:84] Creating CNI manager for ""
	I0415 04:29:29.223331   23455 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 04:29:29.223349   23455 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 04:29:29.223476   23455 start.go:340] cluster config:
	{Name:download-only-076000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-076000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:29:29.245260   23455 out.go:97] Starting "download-only-076000" primary control-plane node in "download-only-076000" cluster
	I0415 04:29:29.245307   23455 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 04:29:29.265957   23455 out.go:97] Pulling base image v0.0.43-1712854342-18621 ...
	I0415 04:29:29.266087   23455 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 04:29:29.266165   23455 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local docker daemon
	I0415 04:29:29.315591   23455 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f to local cache
	I0415 04:29:29.315822   23455 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory
	I0415 04:29:29.315845   23455 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f in local cache directory, skipping pull
	I0415 04:29:29.315851   23455 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f exists in cache, skipping pull
	I0415 04:29:29.315864   23455 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f as a tarball
	I0415 04:29:29.317117   23455 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0415 04:29:29.317128   23455 cache.go:56] Caching tarball of preloaded images
	I0415 04:29:29.337456   23455 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 04:29:29.359187   23455 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0415 04:29:29.359242   23455 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 04:29:29.445139   23455 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:9834337eee074d8b5e25932a2917a549 -> /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0415 04:29:33.210678   23455 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 04:29:33.210875   23455 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0415 04:29:33.695178   23455 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on docker
	I0415 04:29:33.695478   23455 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/download-only-076000/config.json ...
	I0415 04:29:33.695503   23455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/download-only-076000/config.json: {Name:mk955da96a885d0c8e3bc67e8e152ba4e74b2c78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 04:29:33.695889   23455 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 04:29:33.696189   23455 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18644-22866/.minikube/cache/darwin/amd64/v1.30.0-rc.2/kubectl
	
	
	* The control-plane node download-only-076000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-076000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.30s)
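Note: both LogsDuration runs expect `minikube logs` to fail with exit status 85 while the profile's host does not exist. A sketch of how a caller can distinguish that exit code from other failures using os/exec — illustrative, not the test's actual code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Command taken from the log above.
	cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-076000")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Here the harness would assert ee.ExitCode() == 85.
		fmt.Printf("non-zero exit: %d\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run command:", err)
		return
	}
	fmt.Printf("success:\n%s", out)
}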

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.63s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.63s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-076000
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnlyKic (1.88s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-144000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-144000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-144000
--- PASS: TestDownloadOnlyKic (1.88s)

TestBinaryMirror (1.66s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-082000 --alsologtostderr --binary-mirror http://127.0.0.1:55570 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-082000 --alsologtostderr --binary-mirror http://127.0.0.1:55570 --driver=docker : (1.053689288s)
helpers_test.go:175: Cleaning up "binary-mirror-082000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-082000
--- PASS: TestBinaryMirror (1.66s)
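Note: --binary-mirror points minikube's binary downloads at a local HTTP endpoint (127.0.0.1:55570 above), so something has to be serving files there; presumably the test stands up its own server. A minimal stand-in, assuming a ./mirror directory laid out with whatever files minikube requests (the directory name is a placeholder):

package main

import (
	"log"
	"net/http"
)

// Serves ./mirror over HTTP on the address the test passes to
// --binary-mirror. Illustrative only.
func main() {
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:55570", nil))
}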

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-635000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-635000: exit status 85 (194.979478ms)

-- stdout --
	* Profile "addons-635000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-635000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-635000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-635000: exit status 85 (215.810269ms)

-- stdout --
	* Profile "addons-635000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-635000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (290.43s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-635000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-635000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m50.427419125s)
--- PASS: TestAddons/Setup (290.43s)

TestAddons/parallel/InspektorGadget (10.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ll64b" [df367c17-be2f-4744-8af3-fccd44c1b4cf] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005056432s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-635000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-635000: (5.835863449s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

TestAddons/parallel/MetricsServer (5.86s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.695588ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-k597k" [b5f3e99a-f727-4fa9-9664-4db041745c3a] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005153296s
addons_test.go:415: (dbg) Run:  kubectl --context addons-635000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-635000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.86s)

TestAddons/parallel/HelmTiller (10.9s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.226904ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-qb9j5" [97da339d-73d4-4a9c-aa1d-7c6cb83dd1e9] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005929628s
addons_test.go:473: (dbg) Run:  kubectl --context addons-635000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-635000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.126787999s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-635000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.90s)

TestAddons/parallel/CSI (41.9s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 16.195343ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-635000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-635000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [42072b76-744b-4f6d-81db-cf3c90018fff] Pending
helpers_test.go:344: "task-pv-pod" [42072b76-744b-4f6d-81db-cf3c90018fff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [42072b76-744b-4f6d-81db-cf3c90018fff] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004216133s
addons_test.go:584: (dbg) Run:  kubectl --context addons-635000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-635000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-635000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-635000 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-635000 delete pod task-pv-pod: (1.168118741s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-635000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-635000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-635000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c1b8467c-908e-498b-a10e-942e9828ad35] Pending
helpers_test.go:344: "task-pv-pod-restore" [c1b8467c-908e-498b-a10e-942e9828ad35] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c1b8467c-908e-498b-a10e-942e9828ad35] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.006349384s
addons_test.go:626: (dbg) Run:  kubectl --context addons-635000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-635000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-635000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-635000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-635000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.729280261s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-635000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.90s)
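Note: the waits above poll the PVC phase with repeated jsonpath queries until it leaves Pending. The same loop in Go, shelling out to kubectl with the context and PVC names from this log (interval, timeout, and error handling are arbitrary choices for the sketch):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForBound polls `kubectl get pvc <name> -o jsonpath={.status.phase}`
// until the PVC reports Bound or the deadline passes.
func waitForBound(ctx, pvc string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", pvc, "-o", "jsonpath={.status.phase}", "-n", "default").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s not Bound within %s", pvc, timeout)
}

func main() {
	fmt.Println(waitForBound("addons-635000", "hpvc", 6*time.Minute))
}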

TestAddons/parallel/Headlamp (12.24s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-635000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-635000 --alsologtostderr -v=1: (1.229445482s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-mnjv5" [f7d17e66-038f-4413-ae62-72d247342483] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-mnjv5" [f7d17e66-038f-4413-ae62-72d247342483] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004882042s
--- PASS: TestAddons/parallel/Headlamp (12.24s)

TestAddons/parallel/CloudSpanner (6.69s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-9jvc5" [1812659c-4de7-4dda-a389-f5e46918ca48] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00323908s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-635000
--- PASS: TestAddons/parallel/CloudSpanner (6.69s)

TestAddons/parallel/LocalPath (57.03s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-635000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-635000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-635000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [be62c064-ed62-45bc-ad9b-508150bd1d0c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [be62c064-ed62-45bc-ad9b-508150bd1d0c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [be62c064-ed62-45bc-ad9b-508150bd1d0c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00462247s
addons_test.go:891: (dbg) Run:  kubectl --context addons-635000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-635000 ssh "cat /opt/local-path-provisioner/pvc-bb58991f-634e-4031-ae1d-e60513253c7c_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-635000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-635000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-635000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-635000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.097802725s)
--- PASS: TestAddons/parallel/LocalPath (57.03s)

TestAddons/parallel/NvidiaDevicePlugin (5.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-m2nhw" [620649df-055d-4aab-b1b4-952e9a4874bc] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004749607s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-635000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-rf564" [290d0258-bf6d-4410-a3ad-5b20eb5241b8] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004975715s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-635000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-635000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.68s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-635000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-635000: (10.960295078s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-635000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-635000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-635000
--- PASS: TestAddons/StoppedEnableDisable (11.68s)

TestHyperKitDriverInstallOrUpdate (6.6s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.60s)

TestErrorSpam/setup (19.37s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-718000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-718000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 --driver=docker : (19.370716373s)
--- PASS: TestErrorSpam/setup (19.37s)
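Note: the TestErrorSpam group starts a throwaway cluster and then checks that routine commands (start, status, pause, unpause, stop) do not emit unexpected warning or error lines. A crude stand-in for that kind of scan — the real test's allow-list and matching rules are not shown in this log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Flags lines on stdin that look like errors or warnings.
func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		lower := strings.ToLower(line)
		if strings.Contains(lower, "error") || strings.Contains(lower, "warning") {
			fmt.Println("suspicious output:", line)
		}
	}
}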

TestErrorSpam/start (2.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 start --dry-run
--- PASS: TestErrorSpam/start (2.37s)

TestErrorSpam/status (1.23s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 status
--- PASS: TestErrorSpam/status (1.23s)

TestErrorSpam/pause (1.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 pause
--- PASS: TestErrorSpam/pause (1.65s)

TestErrorSpam/unpause (1.83s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

TestErrorSpam/stop (11.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 stop: (10.779676942s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-718000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-718000 stop
--- PASS: TestErrorSpam/stop (11.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18644-22866/.minikube/files/etc/test/nested/copy/23318/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.66s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-032000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-032000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (37.656786102s)
--- PASS: TestFunctional/serial/StartWithProxy (37.66s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.87s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-032000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-032000 --alsologtostderr -v=8: (34.872660286s)
functional_test.go:659: soft start took 34.873131823s for "functional-032000" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.87s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-032000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 cache add registry.k8s.io/pause:3.1: (1.182797303s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 cache add registry.k8s.io/pause:3.3: (1.172923626s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 cache add registry.k8s.io/pause:latest: (1.098590327s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

TestFunctional/serial/CacheCmd/cache/add_local (2.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2591891214/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 cache add minikube-local-cache-test:functional-032000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 cache add minikube-local-cache-test:functional-032000: (1.568538101s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 cache delete minikube-local-cache-test:functional-032000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-032000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (394.347922ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 kubectl -- --context functional-032000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.00s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.32s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-032000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-032000 get pods: (1.316332658s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.32s)

TestFunctional/serial/ExtraConfig (40.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-032000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-032000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.526484284s)
functional_test.go:757: restart took 40.526632225s for "functional-032000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.53s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-032000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.06s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 logs: (3.064547746s)
--- PASS: TestFunctional/serial/LogsCmd (3.06s)

TestFunctional/serial/LogsFileCmd (3.18s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd601644785/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd601644785/001/logs.txt: (3.180019288s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.18s)

TestFunctional/serial/InvalidService (4.46s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-032000 apply -f testdata/invalidsvc.yaml
E0415 04:39:41.417646   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:39:41.423987   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:39:41.434085   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:39:41.454382   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:39:41.495215   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:39:41.576031   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:39:41.736463   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:39:42.058264   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:39:42.699005   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:39:43.979201   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-032000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-032000: exit status 115 (555.346359ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31286 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-032000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.46s)

TestFunctional/parallel/ConfigCmd (0.65s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 config get cpus: exit status 14 (113.769367ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 config get cpus: exit status 14 (79.535871ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.65s)

TestFunctional/parallel/DashboardCmd (15.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-032000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-032000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25723: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.37s)

TestFunctional/parallel/DryRun (1.73s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-032000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-032000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (936.943712ms)

-- stdout --
	* [functional-032000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0415 04:41:12.297998   25610 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:41:12.298629   25610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:41:12.298644   25610 out.go:304] Setting ErrFile to fd 2...
	I0415 04:41:12.298654   25610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:41:12.299038   25610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 04:41:12.341088   25610 out.go:298] Setting JSON to false
	I0415 04:41:12.366092   25610 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6043,"bootTime":1713175229,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 04:41:12.366170   25610 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:41:12.406047   25610 out.go:177] * [functional-032000] minikube v1.33.0-beta.0 on Darwin 14.4.1
	I0415 04:41:12.452568   25610 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 04:41:12.433146   25610 notify.go:220] Checking for updates...
	I0415 04:41:12.494408   25610 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 04:41:12.536507   25610 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 04:41:12.557350   25610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:41:12.599457   25610 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	I0415 04:41:12.641523   25610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 04:41:12.662819   25610 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 04:41:12.663286   25610 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:41:12.823962   25610 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 04:41:12.824132   25610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 04:41:12.941333   25610 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:false NGoroutines:103 SystemTime:2024-04-15 11:41:12.930615043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-
0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev
SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/d
ocker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 04:41:12.964182   25610 out.go:177] * Using the docker driver based on existing profile
	I0415 04:41:13.005255   25610 start.go:297] selected driver: docker
	I0415 04:41:13.005274   25610 start.go:901] validating driver "docker" against &{Name:functional-032000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-032000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:41:13.005354   25610 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 04:41:13.049454   25610 out.go:177] 
	W0415 04:41:13.070288   25610 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0415 04:41:13.091279   25610 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-032000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.73s)

TestFunctional/parallel/InternationalLanguage (0.84s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-032000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-032000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (835.348922ms)

-- stdout --
	* [functional-032000] minikube v1.33.0-beta.0 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=18644
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0415 04:41:13.986255   25692 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:41:13.986437   25692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:41:13.986441   25692 out.go:304] Setting ErrFile to fd 2...
	I0415 04:41:13.986445   25692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:41:13.986792   25692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 04:41:13.988772   25692 out.go:298] Setting JSON to false
	I0415 04:41:14.011969   25692 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6044,"bootTime":1713175229,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0415 04:41:14.012069   25692 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0415 04:41:14.034327   25692 out.go:177] * [functional-032000] minikube v1.33.0-beta.0 sur Darwin 14.4.1
	I0415 04:41:14.076047   25692 out.go:177]   - MINIKUBE_LOCATION=18644
	I0415 04:41:14.054926   25692 notify.go:220] Checking for updates...
	I0415 04:41:14.117698   25692 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
	I0415 04:41:14.159934   25692 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0415 04:41:14.201918   25692 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 04:41:14.243720   25692 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube
	I0415 04:41:14.285954   25692 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 04:41:14.307030   25692 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 04:41:14.307449   25692 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 04:41:14.441512   25692 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0415 04:41:14.441683   25692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 04:41:14.557482   25692 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:false NGoroutines:103 SystemTime:2024-04-15 11:41:14.5467025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:22 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211072000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-
g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev Sc
hemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/doc
ker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0415 04:41:14.600792   25692 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0415 04:41:14.622095   25692 start.go:297] selected driver: docker
	I0415 04:41:14.622117   25692 start.go:901] validating driver "docker" against &{Name:functional-032000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712854342-18621@sha256:ed83a14d1540ae575c52399493a92b74b64f457445525b45c4b55f3ec4ca873f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-032000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 04:41:14.622200   25692 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 04:41:14.647222   25692 out.go:177] 
	W0415 04:41:14.668823   25692 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0415 04:41:14.690103   25692 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.84s)

TestFunctional/parallel/StatusCmd (1.21s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (27.65s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [682870ca-ae0d-47b3-8d21-16cbcffb306b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005601115s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-032000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-032000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-032000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-032000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2f4283d7-d70d-4b33-97a7-cbb11fe30f31] Pending
helpers_test.go:344: "sp-pod" [2f4283d7-d70d-4b33-97a7-cbb11fe30f31] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2f4283d7-d70d-4b33-97a7-cbb11fe30f31] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.006941702s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-032000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-032000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-032000 delete -f testdata/storage-provisioner/pod.yaml: (1.032829584s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-032000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cefdc606-a197-457b-b771-18242b579957] Pending
helpers_test.go:344: "sp-pod" [cefdc606-a197-457b-b771-18242b579957] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cefdc606-a197-457b-b771-18242b579957] Running
E0415 04:41:03.341206   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004512326s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-032000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.65s)

TestFunctional/parallel/SSHCmd (0.85s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.85s)

TestFunctional/parallel/CpCmd (2.95s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh -n functional-032000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 cp functional-032000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd107618537/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh -n functional-032000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh -n functional-032000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.95s)

TestFunctional/parallel/MySQL (29.42s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-032000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-26q2q" [13b246f8-2b82-43a7-8f26-668164e9e200] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-26q2q" [13b246f8-2b82-43a7-8f26-668164e9e200] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.005138236s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-032000 exec mysql-859648c796-26q2q -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-032000 exec mysql-859648c796-26q2q -- mysql -ppassword -e "show databases;": exit status 1 (115.281311ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-032000 exec mysql-859648c796-26q2q -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-032000 exec mysql-859648c796-26q2q -- mysql -ppassword -e "show databases;": exit status 1 (109.045492ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
E0415 04:40:22.381546   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-032000 exec mysql-859648c796-26q2q -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.42s)

TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/23318/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "sudo cat /etc/test/nested/copy/23318/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

TestFunctional/parallel/CertSync (2.61s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/23318.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "sudo cat /etc/ssl/certs/23318.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/23318.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "sudo cat /usr/share/ca-certificates/23318.pem"
E0415 04:39:51.660671   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/233182.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "sudo cat /etc/ssl/certs/233182.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/233182.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "sudo cat /usr/share/ca-certificates/233182.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.61s)
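
For context: the test pushes one cert into the guest and expects identical bytes back from three places, the literal .pem under /etc/ssl/certs and /usr/share/ca-certificates, plus the OpenSSL subject-hash name (the 51391683.0 / 3ec20f2e.0 style links). A rough Go sketch of that check; the local cert path is purely illustrative:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Local copy of the cert that was synced into the node (path illustrative).
	want, err := os.ReadFile("testdata/23318.pem")
	if err != nil {
		panic(err)
	}
	// The guest locations probed above: the raw .pem in two directories plus
	// the OpenSSL subject-hash link that must resolve to the same content.
	for _, p := range []string{
		"/etc/ssl/certs/23318.pem",
		"/usr/share/ca-certificates/23318.pem",
		"/etc/ssl/certs/51391683.0",
	} {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-032000",
			"ssh", "sudo cat "+p).Output()
		if err != nil {
			panic(err)
		}
		if !bytes.Equal(bytes.TrimSpace(out), bytes.TrimSpace(want)) {
			fmt.Println("content mismatch at", p)
		}
	}
}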

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-032000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
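
The go-template above iterates the first node's label map and prints each key. A self-contained illustration of that {{range $k, $v := ...}} construct (the sample labels are invented for the demo):

package main

import (
	"os"
	"text/template"
)

func main() {
	// text/template ranges over map keys in sorted order, so this prints
	// each label key followed by a space, like the kubectl output above.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-032000",
		"kubernetes.io/os":       "linux",
	}
	tmpl := template.Must(template.New("labels").
		Parse("{{range $k, $v := .}}{{$k}} {{end}}"))
	_ = tmpl.Execute(os.Stdout, labels)
}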

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 ssh "sudo systemctl is-active crio": exit status 1 (450.328919ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
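
The exit status 1 here is the success path: systemctl is-active exits non-zero (status 3, surfaced through ssh above) when the unit is inactive, and that is exactly what the test asserts for the runtime that was not selected. A sketch of reading that exit code in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-032000",
		"ssh", "sudo systemctl is-active crio").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit with "inactive" on stdout is the wanted outcome
		// when docker, not crio, is the active runtime.
		fmt.Printf("state=%s exit=%d\n", strings.TrimSpace(string(out)), exitErr.ExitCode())
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("unexpectedly active: %s", out)
}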

TestFunctional/parallel/License (0.54s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
E0415 04:39:46.539432   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/License (0.54s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (1.06s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 version -o=json --components: (1.063525766s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-032000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-032000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-032000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-032000 image ls --format short --alsologtostderr:
I0415 04:41:27.505494   25956 out.go:291] Setting OutFile to fd 1 ...
I0415 04:41:27.505684   25956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:41:27.505690   25956 out.go:304] Setting ErrFile to fd 2...
I0415 04:41:27.505693   25956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:41:27.505892   25956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
I0415 04:41:27.506505   25956 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:41:27.506598   25956 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:41:27.506972   25956 cli_runner.go:164] Run: docker container inspect functional-032000 --format={{.State.Status}}
I0415 04:41:27.558302   25956 ssh_runner.go:195] Run: systemctl --version
I0415 04:41:27.558374   25956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-032000
I0415 04:41:27.609479   25956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56343 SSHKeyPath:/Users/jenkins/minikube-integration/18644-22866/.minikube/machines/functional-032000/id_rsa Username:docker}
I0415 04:41:27.707736   25956 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-032000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-032000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-032000 | 9d4f276cc9b20 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.29.3           | 39f995c9f1996 | 127MB  |
| registry.k8s.io/kube-scheduler              | v1.29.3           | 8c390d98f50c0 | 59.6MB |
| docker.io/library/nginx                     | latest            | c613f16b66424 | 187MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-proxy                  | v1.29.3           | a1d263b5dc5b0 | 82.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/localhost/my-image                | functional-032000 | 05c0bb7592103 | 1.24MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-controller-manager     | v1.29.3           | 6052a25da3f97 | 122MB  |
| docker.io/library/nginx                     | alpine            | e289a478ace02 | 42.6MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-032000 image ls --format table --alsologtostderr:
I0415 04:41:30.515097   25991 out.go:291] Setting OutFile to fd 1 ...
I0415 04:41:30.515305   25991 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:41:30.515310   25991 out.go:304] Setting ErrFile to fd 2...
I0415 04:41:30.515314   25991 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:41:30.515512   25991 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
I0415 04:41:30.516149   25991 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:41:30.516241   25991 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:41:30.517802   25991 cli_runner.go:164] Run: docker container inspect functional-032000 --format={{.State.Status}}
I0415 04:41:30.573268   25991 ssh_runner.go:195] Run: systemctl --version
I0415 04:41:30.573348   25991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-032000
I0415 04:41:30.626667   25991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56343 SSHKeyPath:/Users/jenkins/minikube-integration/18644-22866/.minikube/machines/functional-032000/id_rsa Username:docker}
I0415 04:41:30.724300   25991 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-032000 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"59600000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"122000000"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"82400000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-032000"],"size":"32900000"},{"id":"e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"9d4f276cc9b20940b2fc0c0a22c059b003149d20df9c95dd7cdbb8d144046047","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-032000"],"size":"30"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"127000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-032000 image ls --format json --alsologtostderr:
I0415 04:41:30.195810   25985 out.go:291] Setting OutFile to fd 1 ...
I0415 04:41:30.196078   25985 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:41:30.196083   25985 out.go:304] Setting ErrFile to fd 2...
I0415 04:41:30.196086   25985 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:41:30.196273   25985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
I0415 04:41:30.196848   25985 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:41:30.196955   25985 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:41:30.197333   25985 cli_runner.go:164] Run: docker container inspect functional-032000 --format={{.State.Status}}
I0415 04:41:30.247467   25985 ssh_runner.go:195] Run: systemctl --version
I0415 04:41:30.247537   25985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-032000
I0415 04:41:30.302238   25985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56343 SSHKeyPath:/Users/jenkins/minikube-integration/18644-22866/.minikube/machines/functional-032000/id_rsa Username:docker}
I0415 04:41:30.395944   25985 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
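
The JSON payload above is an array of image records. A small sketch of decoding it, with a struct whose fields mirror the keys in this log (an illustration, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	// One record copied verbatim from the output above.
	data := []byte(`[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"}]`)
	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags[0], img.Size)
	}
}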

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-032000 image ls --format yaml --alsologtostderr:
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "82400000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "127000000"
- id: c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-032000
size: "32900000"
- id: 9d4f276cc9b20940b2fc0c0a22c059b003149d20df9c95dd7cdbb8d144046047
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-032000
size: "30"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "122000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "59600000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-032000 image ls --format yaml --alsologtostderr:
I0415 04:41:27.817436   25962 out.go:291] Setting OutFile to fd 1 ...
I0415 04:41:27.818135   25962 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:41:27.818144   25962 out.go:304] Setting ErrFile to fd 2...
I0415 04:41:27.818150   25962 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:41:27.818634   25962 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
I0415 04:41:27.819252   25962 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:41:27.819346   25962 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:41:27.819713   25962 cli_runner.go:164] Run: docker container inspect functional-032000 --format={{.State.Status}}
I0415 04:41:27.870061   25962 ssh_runner.go:195] Run: systemctl --version
I0415 04:41:27.870132   25962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-032000
I0415 04:41:27.921111   25962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56343 SSHKeyPath:/Users/jenkins/minikube-integration/18644-22866/.minikube/machines/functional-032000/id_rsa Username:docker}
I0415 04:41:28.017369   25962 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 ssh pgrep buildkitd: exit status 1 (373.406845ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image build -t localhost/my-image:functional-032000 testdata/build --alsologtostderr
2024/04/15 04:41:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 image build -t localhost/my-image:functional-032000 testdata/build --alsologtostderr: (2.273006056s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-032000 image build -t localhost/my-image:functional-032000 testdata/build --alsologtostderr:
I0415 04:41:28.501600   25978 out.go:291] Setting OutFile to fd 1 ...
I0415 04:41:28.501794   25978 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:41:28.501800   25978 out.go:304] Setting ErrFile to fd 2...
I0415 04:41:28.501803   25978 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 04:41:28.501977   25978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
I0415 04:41:28.502577   25978 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:41:28.503262   25978 config.go:182] Loaded profile config "functional-032000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 04:41:28.503680   25978 cli_runner.go:164] Run: docker container inspect functional-032000 --format={{.State.Status}}
I0415 04:41:28.554392   25978 ssh_runner.go:195] Run: systemctl --version
I0415 04:41:28.554463   25978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-032000
I0415 04:41:28.604980   25978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56343 SSHKeyPath:/Users/jenkins/minikube-integration/18644-22866/.minikube/machines/functional-032000/id_rsa Username:docker}
I0415 04:41:28.700741   25978 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1807364264.tar
I0415 04:41:28.700826   25978 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0415 04:41:28.709181   25978 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1807364264.tar
I0415 04:41:28.713020   25978 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1807364264.tar: stat -c "%s %y" /var/lib/minikube/build/build.1807364264.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1807364264.tar': No such file or directory
I0415 04:41:28.713051   25978 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1807364264.tar --> /var/lib/minikube/build/build.1807364264.tar (3072 bytes)
I0415 04:41:28.734875   25978 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1807364264
I0415 04:41:28.743374   25978 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1807364264 -xf /var/lib/minikube/build/build.1807364264.tar
I0415 04:41:28.752401   25978 docker.go:360] Building image: /var/lib/minikube/build/build.1807364264
I0415 04:41:28.752478   25978 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-032000 /var/lib/minikube/build/build.1807364264
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:05c0bb759210318dbcb147ae76e36adab97364f2e061889dbff944c1b9cfec1d done
#8 naming to localhost/my-image:functional-032000 done
#8 DONE 0.0s
I0415 04:41:30.667345   25978 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-032000 /var/lib/minikube/build/build.1807364264: (1.91488133s)
I0415 04:41:30.667447   25978 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1807364264
I0415 04:41:30.676676   25978 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1807364264.tar
I0415 04:41:30.685087   25978 build_images.go:217] Built localhost/my-image:functional-032000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1807364264.tar
I0415 04:41:30.685113   25978 build_images.go:133] succeeded building to: functional-032000
I0415 04:41:30.685117   25978 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.96s)
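
As the build_images.go lines show, the host packs the testdata/build directory into a tar, copies it to the node, and runs docker build there. A compact sketch of the packing step with archive/tar (paths and function layout illustrative):

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarDir writes every regular file under src into a tar archive at dst,
// the same shape of build context the log above ships to the node.
func tarDir(src, dst string) error {
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	tw := tar.NewWriter(out)
	defer tw.Close()
	return filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(src, path)
		if err != nil {
			return err
		}
		hdr.Name = filepath.ToSlash(rel)
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	if err := tarDir("testdata/build", filepath.Join(os.TempDir(), "build.tar")); err != nil {
		panic(err)
	}
}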

TestFunctional/parallel/ImageCommands/Setup (2.23s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.13914283s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-032000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.23s)

TestFunctional/parallel/DockerEnv/bash (2.02s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-032000 docker-env) && out/minikube-darwin-amd64 status -p functional-032000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-032000 docker-env) && out/minikube-darwin-amd64 status -p functional-032000": (1.290682917s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-032000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.02s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image load --daemon gcr.io/google-containers/addon-resizer:functional-032000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 image load --daemon gcr.io/google-containers/addon-resizer:functional-032000 --alsologtostderr: (3.753603943s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.08s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image load --daemon gcr.io/google-containers/addon-resizer:functional-032000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 image load --daemon gcr.io/google-containers/addon-resizer:functional-032000 --alsologtostderr: (2.143429605s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.46s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.947269745s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-032000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image load --daemon gcr.io/google-containers/addon-resizer:functional-032000 --alsologtostderr
E0415 04:40:01.901370   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 image load --daemon gcr.io/google-containers/addon-resizer:functional-032000 --alsologtostderr: (3.776935024s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.13s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image save gcr.io/google-containers/addon-resizer:functional-032000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 image save gcr.io/google-containers/addon-resizer:functional-032000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.645194262s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.65s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image rm gcr.io/google-containers/addon-resizer:functional-032000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.979081871s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.30s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-032000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 image save --daemon gcr.io/google-containers/addon-resizer:functional-032000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-032000 image save --daemon gcr.io/google-containers/addon-resizer:functional-032000 --alsologtostderr: (1.467105165s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-032000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

TestFunctional/parallel/ServiceCmd/DeployApp (14.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-032000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-032000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-fnmf5" [3262032c-9ef8-4102-8a49-242f6edf584e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-fnmf5" [3262032c-9ef8-4102-8a49-242f6edf584e] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.005512779s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.17s)
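
The helper above polls the cluster until every pod matching app=hello-node reports Running (here that took about 14s). A rough sketch of such a poll via kubectl and jsonpath, assuming kubectl on PATH; the loop shape is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		// Collect the phase of every pod matching the selector.
		out, err := exec.Command("kubectl", "--context", "functional-032000",
			"get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		healthy := err == nil && len(phases) > 0
		for _, p := range phases {
			if p != "Running" {
				healthy = false
			}
		}
		if healthy {
			fmt.Println("app=hello-node healthy:", phases)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=hello-node")
}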

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 service list -o json
functional_test.go:1490: Took "466.092303ms" to run "out/minikube-darwin-amd64 -p functional-032000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 service --namespace=default --https --url hello-node: signal: killed (15.004610675s)

-- stdout --
	https://127.0.0.1:56592

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:56592
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
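
The "signal: killed" outcome is the expected ending here: with the Docker driver on darwin, minikube service --url stays in the foreground to keep its tunnel open (hence the "terminal needs to be open" warning), so the test reads the printed endpoint and then kills the process at the 15s cutoff. A sketch of that capture-then-kill pattern with a context deadline:

package main

import (
	"bufio"
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The command holds the tunnel open in the foreground, so give it a
	// deadline and read the URL it prints before the context kills it.
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "-p", "functional-032000",
		"service", "--namespace=default", "--https", "--url", "hello-node")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	scanner := bufio.NewScanner(stdout)
	if scanner.Scan() {
		fmt.Println("found endpoint:", scanner.Text()) // e.g. https://127.0.0.1:56592
	}
	_ = cmd.Wait() // exits with "signal: killed" once the deadline fires
}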

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-032000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-032000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-032000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-032000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 25412: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-032000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-032000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1f92e684-aaaf-4c1c-aec5-f48f6ea5f56f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1f92e684-aaaf-4c1c-aec5-f48f6ea5f56f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.005113551s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-032000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-032000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 25449: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 service hello-node --url --format={{.IP}}: signal: killed (15.003509472s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 service hello-node --url: signal: killed (15.003277017s)

-- stdout --
	http://127.0.0.1:56659

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:56659
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "455.67467ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "87.467911ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "457.127332ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "86.588676ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

TestFunctional/parallel/MountCmd/any-port (8.19s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3459847988/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713181271694391000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3459847988/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713181271694391000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3459847988/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713181271694391000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3459847988/001/test-1713181271694391000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (399.622198ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 15 11:41 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 15 11:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 15 11:41 test-1713181271694391000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh cat /mount-9p/test-1713181271694391000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-032000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [207fea54-738b-41a6-be3d-718843022c04] Pending
helpers_test.go:344: "busybox-mount" [207fea54-738b-41a6-be3d-718843022c04] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [207fea54-738b-41a6-be3d-718843022c04] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [207fea54-738b-41a6-be3d-718843022c04] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004028151s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-032000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3459847988/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.19s)

TestFunctional/parallel/MountCmd/specific-port (2.52s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1633183642/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (429.215513ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1633183642/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 ssh "sudo umount -f /mount-9p": exit status 1 (389.21616ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-032000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1633183642/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.52s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.98s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1697276547/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1697276547/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1697276547/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-032000 ssh "findmnt -T" /mount1: exit status 1 (581.107518ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-032000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-032000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1697276547/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1697276547/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-032000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1697276547/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.98s)

TestFunctional/delete_addon-resizer_images (0.13s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-032000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-032000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-032000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestMultiControlPlane/serial/StartCluster (104.41s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-917000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0415 04:42:25.260335   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-917000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m43.273442369s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr: (1.131414803s)
--- PASS: TestMultiControlPlane/serial/StartCluster (104.41s)

TestMultiControlPlane/serial/DeployApp (5.63s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-917000 -- rollout status deployment/busybox: (2.943135875s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-4nj22 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-dsw6k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-qh6qz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-4nj22 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-dsw6k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-qh6qz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-4nj22 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-dsw6k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-qh6qz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.63s)

TestMultiControlPlane/serial/PingHostFromPods (1.43s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-4nj22 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-4nj22 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-dsw6k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-dsw6k -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-qh6qz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-917000 -- exec busybox-7fdf7869d9-qh6qz -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.43s)

TestMultiControlPlane/serial/AddWorkerNode (19.63s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-917000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-917000 -v=7 --alsologtostderr: (18.243117802s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr: (1.387074429s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (19.63s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-917000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.140560426s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

TestMultiControlPlane/serial/CopyFile (25.46s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-darwin-amd64 -p ha-917000 status --output json -v=7 --alsologtostderr: (1.4073485s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp testdata/cp-test.txt ha-917000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile328060737/001/cp-test_ha-917000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000:/home/docker/cp-test.txt ha-917000-m02:/home/docker/cp-test_ha-917000_ha-917000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m02 "sudo cat /home/docker/cp-test_ha-917000_ha-917000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000:/home/docker/cp-test.txt ha-917000-m03:/home/docker/cp-test_ha-917000_ha-917000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m03 "sudo cat /home/docker/cp-test_ha-917000_ha-917000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000:/home/docker/cp-test.txt ha-917000-m04:/home/docker/cp-test_ha-917000_ha-917000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m04 "sudo cat /home/docker/cp-test_ha-917000_ha-917000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp testdata/cp-test.txt ha-917000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile328060737/001/cp-test_ha-917000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m02:/home/docker/cp-test.txt ha-917000:/home/docker/cp-test_ha-917000-m02_ha-917000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000 "sudo cat /home/docker/cp-test_ha-917000-m02_ha-917000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m02:/home/docker/cp-test.txt ha-917000-m03:/home/docker/cp-test_ha-917000-m02_ha-917000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m03 "sudo cat /home/docker/cp-test_ha-917000-m02_ha-917000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m02:/home/docker/cp-test.txt ha-917000-m04:/home/docker/cp-test_ha-917000-m02_ha-917000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m04 "sudo cat /home/docker/cp-test_ha-917000-m02_ha-917000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp testdata/cp-test.txt ha-917000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile328060737/001/cp-test_ha-917000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m03:/home/docker/cp-test.txt ha-917000:/home/docker/cp-test_ha-917000-m03_ha-917000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000 "sudo cat /home/docker/cp-test_ha-917000-m03_ha-917000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m03:/home/docker/cp-test.txt ha-917000-m02:/home/docker/cp-test_ha-917000-m03_ha-917000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m02 "sudo cat /home/docker/cp-test_ha-917000-m03_ha-917000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m03:/home/docker/cp-test.txt ha-917000-m04:/home/docker/cp-test_ha-917000-m03_ha-917000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m04 "sudo cat /home/docker/cp-test_ha-917000-m03_ha-917000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp testdata/cp-test.txt ha-917000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile328060737/001/cp-test_ha-917000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m04:/home/docker/cp-test.txt ha-917000:/home/docker/cp-test_ha-917000-m04_ha-917000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000 "sudo cat /home/docker/cp-test_ha-917000-m04_ha-917000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m04:/home/docker/cp-test.txt ha-917000-m02:/home/docker/cp-test_ha-917000-m04_ha-917000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m02 "sudo cat /home/docker/cp-test_ha-917000-m04_ha-917000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 cp ha-917000-m04:/home/docker/cp-test.txt ha-917000-m03:/home/docker/cp-test_ha-917000-m04_ha-917000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 ssh -n ha-917000-m03 "sudo cat /home/docker/cp-test_ha-917000-m04_ha-917000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (25.46s)

TestMultiControlPlane/serial/StopSecondaryNode (11.94s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-917000 node stop m02 -v=7 --alsologtostderr: (10.862395797s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr: exit status 7 (1.080366906s)

-- stdout --
	ha-917000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-917000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-917000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0415 04:44:23.079252   27229 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:44:23.079457   27229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:44:23.079463   27229 out.go:304] Setting ErrFile to fd 2...
	I0415 04:44:23.079467   27229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:44:23.080395   27229 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 04:44:23.080955   27229 out.go:298] Setting JSON to false
	I0415 04:44:23.080984   27229 mustload.go:65] Loading cluster: ha-917000
	I0415 04:44:23.081018   27229 notify.go:220] Checking for updates...
	I0415 04:44:23.081294   27229 config.go:182] Loaded profile config "ha-917000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 04:44:23.081309   27229 status.go:255] checking status of ha-917000 ...
	I0415 04:44:23.081758   27229 cli_runner.go:164] Run: docker container inspect ha-917000 --format={{.State.Status}}
	I0415 04:44:23.134414   27229 status.go:330] ha-917000 host status = "Running" (err=<nil>)
	I0415 04:44:23.134444   27229 host.go:66] Checking if "ha-917000" exists ...
	I0415 04:44:23.134705   27229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917000
	I0415 04:44:23.185940   27229 host.go:66] Checking if "ha-917000" exists ...
	I0415 04:44:23.186252   27229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 04:44:23.186312   27229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917000
	I0415 04:44:23.238479   27229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56804 SSHKeyPath:/Users/jenkins/minikube-integration/18644-22866/.minikube/machines/ha-917000/id_rsa Username:docker}
	I0415 04:44:23.335249   27229 ssh_runner.go:195] Run: systemctl --version
	I0415 04:44:23.339669   27229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 04:44:23.349982   27229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-917000
	I0415 04:44:23.400912   27229 kubeconfig.go:125] found "ha-917000" server: "https://127.0.0.1:56803"
	I0415 04:44:23.400946   27229 api_server.go:166] Checking apiserver status ...
	I0415 04:44:23.400985   27229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 04:44:23.411736   27229 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2133/cgroup
	W0415 04:44:23.420794   27229 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2133/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 04:44:23.420862   27229 ssh_runner.go:195] Run: ls
	I0415 04:44:23.424688   27229 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56803/healthz ...
	I0415 04:44:23.429808   27229 api_server.go:279] https://127.0.0.1:56803/healthz returned 200:
	ok
	I0415 04:44:23.429822   27229 status.go:422] ha-917000 apiserver status = Running (err=<nil>)
	I0415 04:44:23.429834   27229 status.go:257] ha-917000 status: &{Name:ha-917000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 04:44:23.429845   27229 status.go:255] checking status of ha-917000-m02 ...
	I0415 04:44:23.430099   27229 cli_runner.go:164] Run: docker container inspect ha-917000-m02 --format={{.State.Status}}
	I0415 04:44:23.480690   27229 status.go:330] ha-917000-m02 host status = "Stopped" (err=<nil>)
	I0415 04:44:23.480713   27229 status.go:343] host is not running, skipping remaining checks
	I0415 04:44:23.480723   27229 status.go:257] ha-917000-m02 status: &{Name:ha-917000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 04:44:23.480734   27229 status.go:255] checking status of ha-917000-m03 ...
	I0415 04:44:23.481018   27229 cli_runner.go:164] Run: docker container inspect ha-917000-m03 --format={{.State.Status}}
	I0415 04:44:23.532162   27229 status.go:330] ha-917000-m03 host status = "Running" (err=<nil>)
	I0415 04:44:23.532188   27229 host.go:66] Checking if "ha-917000-m03" exists ...
	I0415 04:44:23.532462   27229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917000-m03
	I0415 04:44:23.583478   27229 host.go:66] Checking if "ha-917000-m03" exists ...
	I0415 04:44:23.583783   27229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 04:44:23.583833   27229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917000-m03
	I0415 04:44:23.636349   27229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:/Users/jenkins/minikube-integration/18644-22866/.minikube/machines/ha-917000-m03/id_rsa Username:docker}
	I0415 04:44:23.730181   27229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 04:44:23.742685   27229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-917000
	I0415 04:44:23.800822   27229 kubeconfig.go:125] found "ha-917000" server: "https://127.0.0.1:56803"
	I0415 04:44:23.800845   27229 api_server.go:166] Checking apiserver status ...
	I0415 04:44:23.800887   27229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 04:44:23.811585   27229 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2045/cgroup
	W0415 04:44:23.820985   27229 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2045/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0415 04:44:23.821051   27229 ssh_runner.go:195] Run: ls
	I0415 04:44:23.824990   27229 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56803/healthz ...
	I0415 04:44:23.828873   27229 api_server.go:279] https://127.0.0.1:56803/healthz returned 200:
	ok
	I0415 04:44:23.828916   27229 status.go:422] ha-917000-m03 apiserver status = Running (err=<nil>)
	I0415 04:44:23.828926   27229 status.go:257] ha-917000-m03 status: &{Name:ha-917000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 04:44:23.828936   27229 status.go:255] checking status of ha-917000-m04 ...
	I0415 04:44:23.829192   27229 cli_runner.go:164] Run: docker container inspect ha-917000-m04 --format={{.State.Status}}
	I0415 04:44:23.880475   27229 status.go:330] ha-917000-m04 host status = "Running" (err=<nil>)
	I0415 04:44:23.880514   27229 host.go:66] Checking if "ha-917000-m04" exists ...
	I0415 04:44:23.880791   27229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-917000-m04
	I0415 04:44:23.933985   27229 host.go:66] Checking if "ha-917000-m04" exists ...
	I0415 04:44:23.934253   27229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 04:44:23.934304   27229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-917000-m04
	I0415 04:44:23.986466   27229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57027 SSHKeyPath:/Users/jenkins/minikube-integration/18644-22866/.minikube/machines/ha-917000-m04/id_rsa Username:docker}
	I0415 04:44:24.083479   27229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 04:44:24.094724   27229 status.go:257] ha-917000-m04 status: &{Name:ha-917000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.94s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.92s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.92s)

TestMultiControlPlane/serial/RestartSecondaryNode (67.88s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 node start m02 -v=7 --alsologtostderr
E0415 04:44:41.413118   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:44:53.983857   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:44:53.989526   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:44:54.001608   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:44:54.022543   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:44:54.062953   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:44:54.143525   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:44:54.303866   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:44:54.624089   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:44:55.264515   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:44:56.545211   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:44:59.105660   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:45:04.226426   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:45:09.098393   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:45:14.467056   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-917000 node start m02 -v=7 --alsologtostderr: (1m6.441987158s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr: (1.386141627s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (67.88s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.13190321s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (210.64s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-917000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-917000 -v=7 --alsologtostderr
E0415 04:45:34.946978   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-917000 -v=7 --alsologtostderr: (34.247485402s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-917000 --wait=true -v=7 --alsologtostderr
E0415 04:46:15.907111   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:47:37.826766   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-917000 --wait=true -v=7 --alsologtostderr: (2m56.249420787s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-917000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (210.64s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.15s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-917000 node delete m03 -v=7 --alsologtostderr: (10.985916826s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr: (1.034402144s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.15s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

TestMultiControlPlane/serial/StopCluster (33.01s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 stop -v=7 --alsologtostderr
E0415 04:49:41.408621   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-917000 stop -v=7 --alsologtostderr: (32.796202917s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr: exit status 7 (213.594442ms)

-- stdout --
	ha-917000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-917000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0415 04:49:50.485232   27892 out.go:291] Setting OutFile to fd 1 ...
	I0415 04:49:50.485443   27892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:49:50.485449   27892 out.go:304] Setting ErrFile to fd 2...
	I0415 04:49:50.485452   27892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 04:49:50.485633   27892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18644-22866/.minikube/bin
	I0415 04:49:50.485812   27892 out.go:298] Setting JSON to false
	I0415 04:49:50.485836   27892 mustload.go:65] Loading cluster: ha-917000
	I0415 04:49:50.485880   27892 notify.go:220] Checking for updates...
	I0415 04:49:50.486135   27892 config.go:182] Loaded profile config "ha-917000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 04:49:50.486151   27892 status.go:255] checking status of ha-917000 ...
	I0415 04:49:50.486539   27892 cli_runner.go:164] Run: docker container inspect ha-917000 --format={{.State.Status}}
	I0415 04:49:50.536269   27892 status.go:330] ha-917000 host status = "Stopped" (err=<nil>)
	I0415 04:49:50.536291   27892 status.go:343] host is not running, skipping remaining checks
	I0415 04:49:50.536297   27892 status.go:257] ha-917000 status: &{Name:ha-917000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 04:49:50.536314   27892 status.go:255] checking status of ha-917000-m02 ...
	I0415 04:49:50.536548   27892 cli_runner.go:164] Run: docker container inspect ha-917000-m02 --format={{.State.Status}}
	I0415 04:49:50.585765   27892 status.go:330] ha-917000-m02 host status = "Stopped" (err=<nil>)
	I0415 04:49:50.585806   27892 status.go:343] host is not running, skipping remaining checks
	I0415 04:49:50.585818   27892 status.go:257] ha-917000-m02 status: &{Name:ha-917000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 04:49:50.585840   27892 status.go:255] checking status of ha-917000-m04 ...
	I0415 04:49:50.586120   27892 cli_runner.go:164] Run: docker container inspect ha-917000-m04 --format={{.State.Status}}
	I0415 04:49:50.635102   27892 status.go:330] ha-917000-m04 host status = "Stopped" (err=<nil>)
	I0415 04:49:50.635127   27892 status.go:343] host is not running, skipping remaining checks
	I0415 04:49:50.635135   27892 status.go:257] ha-917000-m04 status: &{Name:ha-917000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.01s)

TestMultiControlPlane/serial/RestartCluster (99.18s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-917000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0415 04:49:53.980557   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
E0415 04:50:21.666411   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-917000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m37.952498028s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr: (1.10426881s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (99.18s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (39.85s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-917000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-917000 --control-plane -v=7 --alsologtostderr: (38.469237997s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-darwin-amd64 -p ha-917000 status -v=7 --alsologtostderr: (1.382452672s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.85s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.28s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.274937737s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.28s)

TestImageBuild/serial/Setup (21.34s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-593000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-593000 --driver=docker : (21.338156658s)
--- PASS: TestImageBuild/serial/Setup (21.34s)

TestImageBuild/serial/NormalBuild (1.91s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-593000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-593000: (1.906395867s)
--- PASS: TestImageBuild/serial/NormalBuild (1.91s)

TestImageBuild/serial/BuildWithBuildArg (1.1s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-593000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-593000: (1.098076813s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.10s)

TestImageBuild/serial/BuildWithDockerIgnore (0.84s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-593000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.84s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.85s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-593000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.85s)

TestJSONOutput/start/Command (74.85s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-644000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-644000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m14.85400787s)
--- PASS: TestJSONOutput/start/Command (74.85s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.56s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-644000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-644000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-644000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-644000 --output=json --user=testUser: (5.729124585s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.77s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-412000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-412000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (391.028859ms)

-- stdout --
	{"specversion":"1.0","id":"db053115-742e-4099-9231-84c5b14711a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-412000] minikube v1.33.0-beta.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"005517b6-381f-4628-9450-320553867b97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18644"}}
	{"specversion":"1.0","id":"9dafe851-a775-4699-a274-20501310c7a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig"}}
	{"specversion":"1.0","id":"78ab4356-bce1-4bf9-ba57-efcccba1d05f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"36ac3815-a276-4b49-95ef-c00bcb262406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0a8b0fd0-0dd3-452f-ab31-6c35ef8085c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18644-22866/.minikube"}}
	{"specversion":"1.0","id":"7900c73c-8562-478c-a463-ff02841809f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2f5ec6cf-5bca-4a86-9d46-1628231f3bd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-412000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-412000
--- PASS: TestErrorJSONOutput (0.77s)

TestKicCustomNetwork/create_custom_network (23.14s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-123000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-123000 --network=: (20.619864731s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-123000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-123000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-123000: (2.466122386s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.14s)

TestKicCustomNetwork/use_default_bridge_network (23.19s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-417000 --network=bridge
E0415 04:54:41.522905   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
E0415 04:54:54.092967   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-417000 --network=bridge: (20.892830643s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-417000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-417000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-417000: (2.243077114s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.19s)

TestKicExistingNetwork (23.13s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-375000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-375000 --network=existing-network: (20.522462621s)
helpers_test.go:175: Cleaning up "existing-network-375000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-375000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-375000: (2.220940567s)
--- PASS: TestKicExistingNetwork (23.13s)

TestKicCustomSubnet (23.44s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-064000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-064000 --subnet=192.168.60.0/24: (21.016385523s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-064000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-064000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-064000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-064000: (2.366119832s)
--- PASS: TestKicCustomSubnet (23.44s)

TestKicStaticIP (22.46s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-264000 --static-ip=192.168.200.200
E0415 04:56:04.570689   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-264000 --static-ip=192.168.200.200: (19.840398213s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-264000 ip
helpers_test.go:175: Cleaning up "static-ip-264000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-264000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-264000: (2.38087714s)
--- PASS: TestKicStaticIP (22.46s)

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (48.48s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-208000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-208000 --driver=docker : (20.860993254s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-210000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-210000 --driver=docker : (20.86274059s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-208000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-210000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-210000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-210000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-210000: (2.438450471s)
helpers_test.go:175: Cleaning up "first-208000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-208000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-208000: (2.402898961s)
--- PASS: TestMinikubeProfile (48.48s)

TestMountStart/serial/StartWithMountFirst (7.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-987000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-987000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.380580229s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.38s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-987000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (7.41s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-001000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-001000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.405088611s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.41s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-001000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (2.12s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-987000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-987000 --alsologtostderr -v=5: (2.116856528s)
--- PASS: TestMountStart/serial/DeleteFirst (2.12s)

TestPreload (131.42s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-461000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0415 05:44:41.625507   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/addons-635000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-461000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m35.692002195s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-461000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-461000 image pull gcr.io/k8s-minikube/busybox: (1.387736514s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-461000
E0415 05:44:54.196929   23318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18644-22866/.minikube/profiles/functional-032000/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-461000: (10.852389634s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-461000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-461000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (20.692720591s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-461000 image list
helpers_test.go:175: Cleaning up "test-preload-461000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-461000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-461000: (2.44575152s)
--- PASS: TestPreload (131.42s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.7s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0-beta.0 on darwin
- MINIKUBE_LOCATION=18644
- KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3426427155/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3426427155/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3426427155/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3426427155/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.70s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.02s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0-beta.0 on darwin
- MINIKUBE_LOCATION=18644
- KUBECONFIG=/Users/jenkins/minikube-integration/18644-22866/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2255503276/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2255503276/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2255503276/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2255503276/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.02s)

Test skip (19/213)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

TestAddons/parallel/Registry (13.68s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 12.913533ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-bnpz6" [330d2a59-1afb-455e-9219-be09bc44ac53] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00428026s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9q49m" [32417c3d-0470-4410-9b74-8c2cd871f817] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005037115s
addons_test.go:340: (dbg) Run:  kubectl --context addons-635000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-635000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-635000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.604915316s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.68s)

TestAddons/parallel/Ingress (11.02s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-635000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-635000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-635000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ffcf2967-76de-4935-9657-1b65e2e192f1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ffcf2967-76de-4935-9657-1b65e2e192f1] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00435721s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-635000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.02s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (8.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-032000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-032000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-84lkw" [6fcfadba-5cc4-4946-86c8-7fad8243144c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-84lkw" [6fcfadba-5cc4-4946-86c8-7fad8243144c] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004772745s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.12s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)